Gene networks associated with conditional fear in mice identified using a systems genetics approach
Background: Our understanding of the genetic basis of learning and memory remains shrouded in mystery. To explore the genetic networks governing the biology of conditional fear, we used a systems genetics approach to analyze a hybrid mouse diversity panel (HMDP) with high mapping resolution.

Results: A total of 27 behavioral quantitative trait loci were mapped with a false discovery rate of 5%. By integrating fear phenotypes, transcript profiling data from hippocampus and striatum, and genotype information, two gene co-expression networks correlated with context-dependent immobility were identified. We prioritized the key markers and genes in these pathways using intramodular connectivity measures and structural equation modeling. Highly connected genes in the context fear modules included Psmd6, Ube2a and Usp33, suggesting an important role for ubiquitination in learning and memory. In addition, we surveyed the architecture of brain transcript regulation and demonstrated preservation of gene co-expression modules in hippocampus and striatum, while also highlighting important differences. Rps15a, Kif3a, Stard7, 6330503K22RIK, and Plvap were among the individual genes whose transcript abundance was strongly associated with fear phenotypes.

Conclusion: Application of our multi-faceted mapping strategy permits an increasingly detailed characterization of the genetic networks underlying behavior.
Background

Advances in both genetic and behavioral techniques are providing unprecedented opportunities for dissecting the gene networks governing behavior. Through a variety of approaches, promising candidate genes have been identified for a wide collection of clinically relevant traits such as anxiety, conditional fear and spatial memory [1][2][3]. Intercrosses and backcrosses have been widely used to identify behavioral quantitative trait loci (QTLs) in mice, but suffer from poor mapping resolution. More recently, the use of outbred mice has allowed fine mapping of a range of biological [3] and expression traits [4,5]. However, outbred mice are a fleeting resource and must be re-genotyped and re-phenotyped for each study.
In spite of many successes, the recent wave of genome-wide association studies paints an increasingly complex picture of genes underlying behavioral traits. The genetic architecture of most behaviors is widely distributed, with collections of independent loci making relatively small contributions to overall trait variability [6,7]. The largely undefined and likely complex contribution of environmental factors to both the etiology and maintenance of behavior represents another formidable obstacle to reliable QTL mapping.
Recent work has achieved superior resolution using panels of inbred mouse lines [8]. Power can be further improved by incorporating recombinant inbred (RI) strains formed by crossing classical inbred strains followed by repeated sibling mating. One such resource is the hybrid mouse diversity panel (HMDP) which combines inbred and RI lines to create a panel of 100 strains with great resolution and statistical power [9]. The HMDP consists of 29 classical inbred strains supplemented with 71 RI strains derived from C57BL/6J crossed with either DBA/2J, A/J or C3H/HeJ. In addition to enhanced resolution, there are other significant advantages to using the HMDP for genetic mapping. Each strain has been genotyped extensively [10], and multiple individuals can be phenotyped for the same trait, reducing measurement variability. Furthermore, the panel is a renewable resource, since each strain can be propagated indefinitely [11]. Phenotype data can be pooled and shared in an ongoing fashion, while the effects of environmental variables are easily studied.
To leverage these emerging resources, we employed an integrative systems approach to explore the genetics of conditional fear. Figure 1 illustrates the sources of data we collect and how we investigate relationships to identify genetic pathways implicated in the predisposition to fear. Mice were phenotyped on a fear conditioning assay, and the quantitative data combined with single nucleotide polymorphism (SNP) genotypes to map behavioral quantitative trait loci (QTLs). We corrected for the confounding effects of relatedness and population structure between strains using efficient mixed model association (EMMA) [12]. By combining genome-wide expression QTL (eQTL) maps for hippocampus and striatum, weighted gene correlation network analysis (WGCNA) [13,14], and structural equation modeling, we identified single genes and pathways with relationships to fear-driven behavioral phenotypes.
Results
To identify regions of the genome associated with fear-related behavior, mice from the HMDP were subjected to a fear conditioning procedure and characterized on 48 unique behavioral phenotypes drawn from different test phases. Using these phenotypes as quantitative traits, we performed a genome-wide association study (GWAS) to identify loci associated with each of the behavioral traits.

Figure 1. A systems biology approach to dissecting fear biology. Data from behavioral phenotype analysis were integrated with SNP genotypes to map behavioral QTLs. Behavioral phenotypes were also compared to gene co-expression modules created from hippocampus and striatum microarray datasets. Gene expression data and SNP genotypes were used together to map expression QTLs. All three datasets were merged to prioritize mapped genes using Network Edge Orienting. This approach identifies gene networks associated with behavioral phenotypes.
Cued and context fear phenotyping
Mice were tested for cued and contextual fear acquired through a Pavlovian conditioning procedure. Such fear memories manifest across a variety of behavioral dimensions and can be collectively quantified through the use of automated tracking and analysis [15]. Immobility (freezing) is a classical measure of fear triggered by an environmental threat. This species-specific defense response can be reliably acquired in a single conditioning trial, making it a widely used model for fear expression and learning and memory. We also monitored other measures of fear including velocity, thigmotaxis (wall-preference), path shape, and habituation. The fear conditioning assay is depicted schematically in Figure 2A. On day one, a mouse is placed in a cage where an auditory conditional stimulus (CS) tone is played for fifteen seconds followed by a brief foot shock. Training consisted of three tone-shock pairings. The next day, the mouse is returned to the same chamber and contextual fear is indexed through a collection of behavioral endpoints including immobility. On the third day, the mouse is placed in a novel chamber and given a series of CS presentations with no foot shock. Cued fear is quantified across the same behavioral endpoints used to assess contextual fear.
Variability in freezing across the panel is shown in Figure 2B. Further testing details for each of the behavioral phenotypes (labeled from B1 to B48) are provided in Additional file 1 (Supplementary methods and Table S1). A cluster dendrogram depicting the similarity between the quantitative behavioral phenotypes across the HMDP is shown in Additional file 1 Figure S1. Surprisingly, context and cue immobility measures clustered closely together although they index different types of learning.
Behavioral QTLs

We mapped a highly significant QTL on chromosome 7 for cued immobility (P = 4.40 × 10⁻⁹). There are two peak markers for this locus, located ~102 kb apart and residing in different linkage disequilibrium blocks (Additional file 1 Figure S3). One peak marker is located within the tyrosinase (Tyr) gene. Since the HMDP is composed of inbred mouse strains, a number are homozygous for a recessive mutation in Tyr leading to an albino coat color (26 of 94 strains phenotyped).
One study looked directly at the effects of Tyr on cue-dependent freezing behavior [16] using both B6 mice with a mutant Tyr allele and an A/J congenic strain with the wild-type B6 allele substituted for the albino Tyr allele. Tyr had only a small influence on fear learning, with minor (if any) learning deficits due to reduced visual acuity [17][18][19], and was likely one of many alleles influencing this phenotype. Interestingly, the second peak has the same P value as the first and lies in the metabotropic glutamate receptor 5 gene (Grm5), which is involved in glutamatergic neurotransmission. Homozygous null mice for Grm5 have been shown to have reduced hippocampal long term potentiation (LTP) [20] and impaired spatial learning [21]. These mice also have a behavioral phenotype associated with a rodent model of schizophrenia [22]. Polymorphism at this locus may contribute to variance in motor activity as a conditioned response to a tone.
eQTL mapping in hippocampus and striatum
Using gene expression measures of 25,697 transcripts as quantitative traits from tissue from both the hippocampus (98 strains, n = 1) and striatum (96 strains, n = 1), we mapped expression quantitative trait loci (eQTLs) and their corresponding expression SNPs (eSNPs) using EMMA ([12], see METHODS). For each tissue, we calculated an independent genome-wide significance threshold corresponding to a false discovery rate (FDR or Q value) < 5% [23]. In hippocampus, this threshold was P < 9.21 × 10⁻⁶ while in striatum the corresponding threshold was P < 1.19 × 10⁻⁵. We separated the eSNPs from each tissue into two separate categories: markers within 2 Mb of the probe start position (termed cis or local) and markers more than 2 Mb away (termed trans or distant). In hippocampus, we mapped 2,128 cis eQTLs, while in striatum we mapped 2,528. There was strong overlap in the cis eQTLs of the two tissues with 1,641 in common (χ² = 11,831, df = 1, P < 10⁻³⁰⁰), indicating that transcription regulation due to polymorphism is strongly preserved between tissues. Interestingly, the set of cis eQTLs unique to hippocampus was enriched in genes from the gene ontology (GO) category [24] involved in the "positive regulation of behavior" (Q = 1.8 × 10⁻³). The top 100 cis eQTLs in each tissue along with locations of their corresponding peak markers and minimum P values are provided in Additional file 1 (Tables S2 and S3).
The presence of a SNP within the 50mer probe sequence of the transcripts interrogated by the microarray might produce spurious false positive cis eQTLs due to a change in binding avidity. To investigate this possibility, we downloaded a list of 8,265,759 known SNPs from the Perlegen SNP Database (http://mouse.cs.ucla.edu/mousehapmap) and searched for each of these SNPs in the 25,697 probes on the Illumina microarray. Based on this list, 3,841 probes contained at least one SNP. In the hippocampus, we observed 535 cis eQTLs with probe SNPs while 317 were expected proportionally (χ² = 22.0, df = 1, P < 2.7 × 10⁻⁶). The striatum also showed slight enrichment, with 602 cis eQTLs exhibiting SNPs in probes versus 372 expected (χ² = 3.0, df = 1, P = 0.08). Although probe SNPs did increase the number of observed cis eQTLs, the proportion was <15%, suggesting that >85% of cis eQTLs do not have evidence of being artifacts due to polymorphism. Of course, other naturally occurring polymorphisms likely exist that are not contained in the Perlegen SNP database and could also lead to false positive associations.
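The enrichment calculation above can be illustrated with a simple goodness-of-fit test in R (the language used elsewhere in this study). The counts are those reported for the hippocampus, though the authors' exact test construction may have differed:

```r
# 3,841 of the 25,697 probes contain a known SNP; of 2,128 hippocampal cis
# eQTLs, 535 fall on SNP-containing probes. Test the observed split against
# the proportion expected if probe SNPs were distributed at random.
observed <- c(with_snp = 535, without_snp = 2128 - 535)
expected_prop <- c(3841, 25697 - 3841) / 25697
chisq.test(x = observed, p = expected_prop)
```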
In the hippocampus, we mapped 481,099 trans eSNPs regulating a total of 5,325 unique probes, while in the striatum, we mapped 619,418 trans eSNPs regulating a total of 15,348 unique probes. Using a counting algorithm (METHODS), we estimated these numbers corresponded to a total of 19,876 trans eQTLs in the hippocampus and 60,150 trans eQTLs in the striatum. Genome-wide probe/marker plots for each significant eSNP are provided in the Supplementary materials (Additional file 1 Figures S4 and S5). Selected cis and trans eQTLs from each tissue are shown in Figure 3A-D.
Comparison of our data with a recent eQTL survey in the hippocampus using heterogeneous stock mice [25] showed significant preservation of cis eQTLs (χ² = 1,171, df = 1, P = 1.1 × 10⁻²⁵⁶), while trans eQTLs did not show significant overlap. This discrepancy could be due to weaker effect sizes for trans eQTLs in general compared to cis or due to differing thresholds for significance. Previous studies also found that trans eQTLs replicated less frequently than cis [26,27]. A recent study of liver using the HMDP [9] found 2,691 cis eQTLs and 3,174 probes with at least one trans eQTL with P < 4.1 × 10⁻⁶. We detected similar numbers of cis eQTLs but more trans loci, even though the same significance threshold was employed for both types of eQTL. This discrepancy suggests differences in the regulatory networks of hepatic versus neural tissue and may reflect greater transcriptional complexity in the brain.
To survey whether trans gene regulation in hippocampus was similar to that found in the striatum, we compared the probes regulated by each marker across the two tissues. Using a 2 × 2 contingency table, we determined whether a probe was regulated by each marker in the hippocampus or not (surpassing a global FDR of 5%) and regulated by the same marker in the striatum or not. There was a significant overlap in the genes regulated by each marker across the tissues (Fisher's Exact Test, df = 1, median omnibus -log₁₀(Q) = 4.1), suggesting strong similarities in the regulatory networks of the two tissues. A genome-wide plot of the -log₁₀(Q) of the degree of overlap in genes regulated by each marker between tissues is shown in Figure 3E. Some markers clearly show better preservation of regulated probes than others. For instance, a SNP on chromosome 7 at 104.063430 Mb regulates 33 unique genes in the hippocampus and 36 genes in the striatum, with 29 of the genes in common. These hubs may have strong control of expression across different tissues. Despite the significant overlap, differences in regulation are likely important in delineating the cellular disparity between hippocampus and striatum.
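As a sketch of this per-marker test, the chromosome 7 example above (33 hippocampal targets, 36 striatal targets, 29 shared, out of 25,697 probes) can be cross-tabulated and tested in R; the FDR machinery applied across all markers is omitted here:

```r
# 2 x 2 table: is each probe trans-regulated by this marker in each tissue?
tab <- matrix(c(29,                    # regulated in both tissues
                36 - 29,               # striatum only
                33 - 29,               # hippocampus only
                25697 - 33 - 36 + 29), # regulated in neither
              nrow = 2,
              dimnames = list(hippocampus = c("yes", "no"),
                              striatum    = c("yes", "no")))
fisher.test(tab)  # a small P value supports shared trans regulation
```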
Weighted gene correlation network analysis (WGCNA)
We examined the large-scale organization of gene co-expression networks in the hippocampus and striatum microarray datasets. Weighted gene co-expression network analysis is a data reduction method that groups genes into modules in an unsupervised manner based on self-organizing properties of complex systems. These co-expression networks are based on topological overlap between genes, considering both the correlation genes have with each other and the degree of shared connections within the network. This method has been used in several recent systems genetics studies to reveal functional gene networks [28,29].
We identified 30 modules in hippocampus containing 39 to 8,445 genes and 25 modules in the striatum containing 34 to 14,582 genes (Additional file 1 Table S4). The largest module in each tissue is the grey module, which is reserved for genes that do not separate into any other modules (noise genes). The hippocampus expression data organized into five more modules than the striatum. This finding could reflect a greater cellular heterogeneity of the hippocampus compared to the striatum, as module construction can tease apart patterns of differential expression in mixtures of cell types [30]. There were other differences in co-expression networks between the two tissues. For instance, the sienna3 module in the hippocampus was not preserved in striatum. This module was significantly enriched in neuropeptide hormone activity (Q = 6.25 × 10⁻⁶) and oxygen binding (Q = 3.68 × 10⁻⁴), indicating that these molecular classes may play important roles in hippocampal function.
To evaluate the degree of module conservation across the hippocampus and striatum, we calculated Z scores for preservation of each module using the hippocampus as a reference. The Zsummary statistic encapsulates evidence that a network module is preserved between a reference and a test network based on aspects of within-module network density and connectivity patterns [31]. Lower Zsummary scores imply module differences while larger ones indicate preservation. Figure 4 demonstrates that most gene co-expression modules showed some degree of preservation across hippocampus and striatum, with larger modules showing better preservation than smaller ones.
The gene expression properties of each of these modules can be condensed into module eigengenes (MEs), which represent the first principal component of each module [32,33]. By correlating these MEs with behavioral phenotypes, we were able to identify groups of genes related to aspects of conditional fear. Figure 5 shows the correlation of each ME in the hippocampus with the behavioral phenotypes of cued and context immobility (B25 and B44). We focused on hippocampus, as this tissue has been previously implicated in learning, memory, and fear [34].
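A minimal sketch of this module-trait correlation in R, using the WGCNA package referenced in the Methods; the object names (hippocampusExpr, moduleColors, traits) are hypothetical placeholders for the strain-level expression matrix, the module assignments, and the behavioral strain means:

```r
library(WGCNA)

# Module eigengenes: the first principal component of each module's expression.
MEs <- moduleEigengenes(hippocampusExpr, colors = moduleColors)$eigengenes

# Correlate each eigengene with each behavioral endpoint (e.g. B25, B44)
# across strains, with Student asymptotic P values.
moduleTraitCor <- cor(MEs, traits, use = "pairwise.complete.obs")
moduleTraitP <- corPvalueStudent(moduleTraitCor,
                                 nSamples = nrow(hippocampusExpr))
```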
The context immobility phenotype (B44) showed the strongest correlations with two MEs in the hippocampus: brown (r = -0.43, P = 0.002, Q = 0.07) and darkgrey (r = 0.4, P = 0.005, Q = 0.08). We focus on these two modules for further analysis and annotate them as context fear module 1 (CF1) and context fear module 2 (CF2), respectively. Notably, no MEs showed significant correlations with cued immobility (B25) even though cue and context immobility phenotypes clustered together (Additional file 1 Figure S1). This observation is consistent with the biology of cued immobility, which relies on the amygdala but is not hippocampus-dependent [35].
We looked for functional enrichment of specific gene ontologies (GO) in the two selected context fear modules using the program GOEAST, which provides an FDR-corrected Q value [36] for enrichment in each category. The most highly represented ontologies are shown in Additional file 1 Tables S5 and S6. Genes in the intracellular portion of the cell were enriched in both modules (CF1: Q = 1.54 × 10⁻¹⁶, CF2: Q = 2.33 × 10⁻⁸), as were those involved in the mitochondrion (CF1: Q = 4.38 × 10⁻⁶, CF2: Q = 2.1 × 10⁻³). By contrast, classes of genes involved in metabolic processes and gene expression were specific to CF1. Genes involved in protein targeting and the rough endoplasmic reticulum were prominent in CF2 but not in CF1. Results of correlations between MEs and all quantified behavioral traits for the hippocampus and striatum are provided in Additional file 1 (Figures S6 and S7).
Genes within each module are prioritized according to their intramodular connectivity (the sum of connection strengths with other genes within the network). Those with a high degree of connectivity are considered hubs and can be viewed as important players in molecular pathways. There was a high correlation between the intramodular connectivity measures of each gene across the hippocampus and striatum (r = 0.53, P < 2.2 × 10⁻¹⁶) indicating strong similarities in the transcriptional networks of these neural tissues.
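The hub ranking described here can be sketched with WGCNA's connectivity functions; the soft-thresholding power and object names below are illustrative assumptions, not values reported by the study:

```r
library(WGCNA)

# Build the weighted adjacency matrix, then compute, for every gene, the sum
# of its connection strengths to the other genes in its own module (kWithin).
adjMat <- adjacency(exprData, power = 6)  # power = 6 is an assumed choice
k <- intramodularConnectivity(adjMat, colors = moduleColors)

# Genes with the highest kWithin within a module are its candidate hubs,
# e.g. for the brown module (CF1).
cf1 <- rownames(k)[moduleColors == "brown"]
head(cf1[order(-k[cf1, "kWithin"])])
```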
The gene mitogen-activated protein kinase kinase 1 (Map2k1) was one of the most highly connected genes in CF1 and has been previously implicated in long-term synaptic plasticity and memory [37]. The gene proteasome (prosome, macropain) 26S subunit, non-ATPase, 6 (Psmd6) acted as another hub in CF1, while in CF2, the genes ubiquitin-conjugating enzyme E2A (Ube2a), nuclear factor I/B (Nfib), and ubiquitin specific peptidase 33 (Usp33) had the strongest intramodular connectivity and served as hubs for this module. These results suggest a role for targeted protein degradation in pathways associated with context-dependent fear, consistent with a recent study that showed that synaptic protein degradation through polyubiquitination underlies the destabilization of retrieved fear memory [38]. Other co-expressed genes identified in these modules may also play critical roles in the molecular mechanisms governing learning and memory. Complete details for the gene co-expression network analysis for each tissue and the corresponding measures of intramodular connectivity for each gene can be found in Supplementary materials (Additional file 2).
MEs as quantitative traits
Each module eigengene can be considered a quantitative trait, allowing for mapping of SNPs associated with variation in groups of co-expressed genes. This strategy reveals loci that perturb the expression of gene modules with hopes of uncovering key drivers for traits of physiological relevance [39]. Mapping results that survive a Bonferroni correction for all 101,629 markers are summarized in Table 2. Loci regulating six MEs in the hippocampus were mapped, of which four were preserved in the striatum and two were specific to hippocampus. The first hippocampus-specific locus regulated the darkolivegreen module and mapped to a SNP on chromosome 7 within an intron of the gene TEA domain family member 1 (Tead1), a gene known to be associated with transcription factor complexes. This module was enriched in the cellular component flotillin complex (Q = 4.90 × 10⁻⁶) and the molecular function calmodulin-dependent protein kinase activity (Q = 4.77 × 10⁻⁵). The second hippocampus-specific locus regulated the white module and mapped to a SNP on chromosome 1 at 173.121821 Mb. This module consisted of genes involved in the positive regulation of the acute inflammatory response to antigenic stimulus (Q = 4.54 × 10⁻⁵).
The module with the strongest association to physiologically relevant GO categories that also possessed regulatory loci for both tissues was the yellowgreen module in the hippocampus (saddlebrown in striatum). This module was enriched in antigen processing and presentation (Q = 1.61 × 10⁻²¹) and MHC protein complex (Q = 3.10 × 10⁻¹⁹). This module may play a role in synaptic remodeling, as neuronal MHC class I molecules were recently found to regulate synapses in the central nervous system in response to activity [40]. Interestingly, the regulatory locus for this module was identical for hippocampus and striatum. A potential candidate for this locus was flotillin 1 (Flot1), a gene with a cis eQTL in both hippocampus and striatum ~24 kb away from this peak marker. This gene product has been found to accumulate in tangle-bearing neurons of Alzheimer's disease [41] and may play a role in learning. In addition, the flotillin complex featured in the darkolivegreen module regulated by a hippocampal locus (above). Other genes in these identified modules should be examined as potential players in the molecular pathways for fear conditioning.

Figure 4. Gene co-expression module preservation across hippocampus and striatum. Modules were constructed separately for each tissue and preservation assessed by Zsummary score using hippocampus modules as the reference set. Larger modules tended to be better preserved across tissues.
Network edge orienting: prioritizing directed trait networks
To look for relationships between genetic variation, differences in gene expression, and behavioral phenotypes, we employed the Network Edge Orienting (NEO) [42] algorithm. Using SNP markers as causal anchors, NEO assigns directionality to trait networks and provides a way to prioritize genes with expression profiles that are coincident with quantitative behavioral phenotypes (Figure 6A). We performed a NEO single marker analysis on markers with an FDR < 10% in the behavioral QTL mapping. The software uses structural equation modeling to fit five models: causal, reactive, independent, and two confounded models. NEO compares the best fitting model relative to the next best fitting model, yielding a log₁₀ likelihood ratio, LEO.NB.AtoB, for each significant SNP for each of the behavioral endpoints. Values greater than 0.3 for this score indicate that the causal model fits the input data twice as well as the next best model; a score of 1 indicates a ten-fold better fit. The measure RMSEA.AtoB is an index of model fit, with values < 0.05 representing a good fit. Figure 6B shows the results of NEO analysis in the hippocampus. The results indicate that two SNP markers located on chromosome 7 regulate the expression of two nearby genes on chromosome 7 (6330503K22RIK and Rps15a) which in turn influence the immobility of the animals before training (B11: Pre training immobility mean).
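Stated as a formula (following the interpretation given above, rather than any additional detail of the NEO software), LEO.NB.AtoB = log₁₀[P(data | causal model) / P(data | next-best model)]; a score of 0.3 therefore corresponds to a likelihood ratio of 10^0.3 ≈ 2, and a score of 1 to a ratio of 10.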
Genetic variation at a SNP on chromosome 11 at 51.279205 Mb was also shown to influence the expression of the nearby kinesin family member 3A gene (Kif3a), which then contributed to variation in thigmotaxis (B33: Pre cue thigmotaxis mean). Kif3a is a kinesin gene involved in moving axon cargo [43] and has been implicated in amyotrophic lateral sclerosis, a disease involving degeneration of motor neurons [44].
Variation at a SNP on chromosome 2 resulted in a change in expression of the gene START domain-containing 7 (Stard7), which then influenced immobility induced by a novel context (B44: Context immobility). The genes 6330503K22RIK and Kif3a also appear as strong candidates for fear related behavior in the NEO analysis for the striatum (Additional file 1 Figure S8), underscoring the similarity of transcriptional regulation in the two tissues.
Discussion
Fear conditioning provides an opportunity to survey a range of clinically relevant processes including short and long-term memory, context generalization, and memory extinction, making it an efficient tool with which to probe the genetics of fear dependent behavior. To map fear related QTLs, we subjected a population of inbred mouse strains to a standard fear conditioning procedure and follow-up memory tests. We then combined behavioral phenotype data with SNP genotypes and tissue specific gene expression to search for candidate genes and related networks associated with fear phenotypes. Across 48 behavioral endpoints, we mapped a total of 27 QTLs, highlighting the complexity of behavioral regulation and showcasing the value of HMDP for mapping fear loci.
The inbred strains of the HMDP were not randomly selected, but were, in fact, carefully chosen to avoid, insofar as possible, high correlation of non-linked genome segments. Nevertheless, there are some shared segments across the genome due to bottlenecks in the breeding and the history of the strains. EMMA endeavors to correct for these artifacts in the association analysis. However, some caution should be applied to the interpretation of the mapping results, since residual bias may remain that cannot be overcome by the analysis of the data.
The strongest behavioral QTL in our investigation was for the phenotype cue immobility and had two peak markers on chromosome 7. These markers were located in the adjacent genes Tyr and Grm5 and had identical P values of 4.4 × 10⁻⁹, yet there were recombination breakpoints between them. Many HMDP strains have mutations in Tyr and are albino, possibly resulting in learning and memory deficits due to decreased visual acuity. However, a study that examined this allele specifically showed that it plays only a minor role in cue immobility and that additional loci are likely to influence fear conditioning [16]. Grm5 is an attractive candidate gene for this locus, since it has previously been shown to be involved in hippocampal LTP.
We surveyed the architecture of transcriptional regulation across two brain regions. We found a smaller number of cis and trans eQTLs in the hippocampus than in the striatum. This diminution may be caused by signal dilution due to the heterogeneous cellular nature of the hippocampus. However, we found that the cis and trans eQTLs in the two tissues overlapped significantly, indicating that DNA polymorphism has a robust effect in modulating gene expression across tissues.
By simplifying the gene expression data into modules, we identified groups of genes that are related to fear related behavior. Two such modules in the hippocampus (CF1 and CF2) showed strong correlations with context-dependent fear measures, allowing identification of networks of genes whose co-expression co-varied with fear phenotypes across the HMDP. We assigned priorities to genes within each module based on their level of intramodular connectivity and mapped loci responsible for regulating MEs in both hippocampus and striatum. Cued and context immobility were phenotypically similar as they clustered together in the behavioral dendrogram. However, the two identified modules did not show strong correlations with cued fear, suggesting that the two different types of fear are expressed through different neural and/or molecular pathways.
A hub gene in CF1 (Psmd6) and two of the most highly connected genes in CF2 (Ube2a and Usp33) have been shown to play roles in ubiquitination. Interestingly, others have shown that ubiquitin-mediated proteolysis is involved in initiating long-term stable memory, as both the removal of specific inhibitory proteins and gene induction are likely to be critical players in fear conditioning [45]. Other components in these modules may be implicated by association in these genetic pathways and provide attractive targets for further investigation.
Structural equation modeling allowed us to identify single markers that influenced the expression of single genes, which in turn influenced fear-related phenotypes. We identified five genes with causal relationships to fear-related phenotypes in the hippocampus and striatum: 6330503K22RIK, Rps15a, Kif3a, Stard7, and Plvap.
Conclusion
In summary, looking at expression patterns in genes and groups of genes in various neural tissues has helped to elucidate the complex molecular networks contributing to fear dependent behavior. While the current approach yielded several potential loci and candidate genes, additional inbred strains would provide increased power for more comprehensive mapping. Next generation sequencing technologies and proteomics should afford even deeper views of genetic polymorphism and expression as we continue to refine gene networks of fear neurobiology.
Methods

Mouse population
Male mice from the Hybrid Mouse Diversity Panel (HMDP) were used for all behavioral analyses. This panel of mice consists of 100 inbred strains, comprising 29 classical inbred strains paired with three sets of RI strains selected for diversity [9]. All mice (n = 700) were obtained through The Jackson Laboratory at approximately 55 days old then housed for a 14-day acclimation period prior to testing. Mice were housed in groups (3-4 per cage) under a 12 hr/12 hr day/night cycle with ad libitum access to food and water. All behavioral testing was conducted during the day portion of the cycle, between the hours of 10 AM and 4 PM. Protocols conformed to NIH Care and Use Guidelines and were approved through the UCLA Animal Research Committee. Mice were housed in their covered home cages and placed in an adjacent holding room. An auditory background stimulus in the form of white noise (80 dB) was delivered through overhead speakers. Previous unpublished observations showed no evidence of an orienting response, or any behavioral responses to stimulus presentation, while in the holding room [15].
Fear Conditioning
All HMDP strains were exposed to a fear conditioning procedure followed by two independent memory tests. Parameters and procedures were identical to those previously described [15]. On each test day, mice were wheeled to a holding room for a 30 min acclimation period prior to testing.
Behavioral Data Analysis
Behavior was recorded digitally from a camera mounted above each test chamber, then digitized at 15 frames per second with the EthoVision Pro tracking system (Noldus Information Technology). For each mouse a total of 48 unique endpoints were quantified automatically with EthoVision software (Additional file 1 Table S1). Varying numbers of biological replicates were obtained for each strain (ranging from n = 3 to n = 16, mean = 7.3). These measures were designed to characterize multiple dimensions of defensive behavior. The methodology and rationale behind these measures has been discussed previously [15]. Mean performance for each endpoint was determined by either collapsing across the entire test session for context fear endpoints or across specific test phases for fear conditioning (pre-US, post-US) and cued fear test (pre-CS, CS) endpoints. The pre-US period consisted of the 3 minutes prior to the initial CS presentation, while the post-US period encompassed the 4.25 minute interval between the first US presentation and removal from the chamber. Likewise, the pre-CS period spanned the 3 minutes prior to CS presentation, and the CS period covered the 12.5 minute period between the first CS presentation and removal from the chamber. Measures reflecting rate changes were quantified by analyzing time course data within individual test phases.
For the context test, endpoint rate changes were calculated as the percent change from the initial 2 minute epoch to the final 2 minute epoch. For multi-phase tests (training, cued fear test), rate changes were calculated as suppression ratios based on mean values from the relevant test phases (pre/(pre+post)). Strain means were calculated and served as the behavioral phenotypes for downstream analysis. Velocity is the mean rate of movement in any given interval (e.g. cm/s), while mobility is the time spent mobile, expressed as a percentage of total time.
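A minimal R sketch of the two rate-change endpoint definitions above (function names are hypothetical):

```r
# Context test: percent change from the initial to the final 2-minute epoch.
context_rate_change <- function(first_epoch, last_epoch) {
  100 * (last_epoch - first_epoch) / first_epoch
}

# Multi-phase tests (training, cued fear test): suppression ratio computed
# from the relevant phase means, pre / (pre + post).
suppression_ratio <- function(pre, post) pre / (pre + post)

suppression_ratio(pre = 10, post = 40)  # 0.2; illustrative values only
```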
Genotype analysis
The classical inbred and RI strains were genotyped previously [9] by the Broad Institute (classical) and the Wellcome Trust Center for Human Genetics (RI). The genotypes of the RI lines at the Broad SNPs were imputed from the Wellcome Trust genotypes. Only SNPs with a minor allele frequency greater than or equal to 10% were used in the analysis to minimize false positives due to small sample size. All genome coordinates are based on NCBI build 35 (mm7) of the mouse genome.
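The minor allele frequency filter can be sketched as follows; 'geno' is a hypothetical strains-by-SNPs matrix with the homozygous inbred genotypes coded 0/1:

```r
# Minor allele frequency per SNP, then drop SNPs with MAF below 10%.
maf <- function(g) { f <- mean(g, na.rm = TRUE); min(f, 1 - f) }
keep <- apply(geno, 2, maf) >= 0.10
geno <- geno[, keep]
```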
Behavioral QTL mapping
Using the collected behavioral phenotypes, we performed a genome-wide association test using the software package EMMA (Efficient Mixed-Model Association) [12]. This program calculates P values which quantify the degree of association between each phenotype-marker pair while correcting for confounding effects of population structure and genetic relatedness between strains in the panel. We used a genome-wide Q value threshold of 5% [23], which corresponds to a P value of 4.1 × 10⁻⁶. To count the number of significant QTLs, the genome was divided into bins of 2 Mb. If significant markers were found in adjacent bins, markers were combined and counted as a single QTL.
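A sketch of this mixed-model scan using the emma R package [12]; the argument layout follows the package's conventions as we understand them, and the input objects (a row of strain means in 'ys', a markers-by-strains genotype matrix 'xs') are placeholders, so treat this as illustrative rather than the authors' exact pipeline:

```r
library(emma)

K <- emma.kinship(xs)           # kinship matrix estimated from the markers
scan <- emma.REML.t(ys, xs, K)  # REML mixed-model association test per SNP

# Convert P values to Benjamini-Hochberg Q values and keep Q < 5%,
# corresponding to the P < 4.1e-6 threshold reported above.
q <- p.adjust(as.vector(scan$ps), method = "BH")
hits <- which(q < 0.05)
```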
Tissue harvesting
Brains were removed from each animal after euthanasia. Hippocampus and striatum were dissected out and flash frozen in liquid nitrogen. RNA was extracted from each sample using the Qiagen RNeasy kit.
Microarray data collection
Gene expression levels were quantified using Illumina MouseRef-8 v2.0 Expression BeadChip microarrays. The data were normalized using the rank invariant option in the software package BeadStudio (Illumina) [46]. The microarray data are available at the Gene Expression Omnibus (GEO) (http://www.ncbi.nlm.nih.gov/geo/) under accession number GSE26500.
Expression quantitative trait loci (eQTL) mapping
Using the marker genotype information from the HMDP and RNA expression data from hippocampus and striatum, we performed a genome-wide association test for each of the 25,697 probes (genes) on the microarray compared to each of the 101,629 SNP markers using the software package EMMA. Markers within 2 Mb of the probe position for each gene were considered cis (local), while those greater than 2 Mb from the probe position were considered trans (distant). Genome-wide significance thresholds were determined by calculating the P value corresponding to a Benjamini and Hochberg corrected FDR of 5% [23]. To count the number of significant trans loci, we divided the genome into bins 2 Mb in width and recorded whether a marker that surpassed an FDR of 5% was observed in each bin. If adjacent bins contained at least one significant marker, the bins were combined and counted as a single locus.
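The distance-based classification and locus-counting procedure can be sketched as follows (positions in base pairs; the vectors are hypothetical placeholders, and the binning is applied per chromosome):

```r
# cis if the marker lies on the probe's chromosome within 2 Mb of its start.
cis <- marker_chr == probe_chr & abs(marker_pos - probe_start) <= 2e6

# Count trans loci on one chromosome: assign significant markers to 2 Mb
# bins, then merge runs of adjacent occupied bins into single loci.
bins <- sort(unique(floor(sig_marker_pos / 2e6)))
n_loci <- if (length(bins) == 0) 0 else sum(diff(bins) > 1) + 1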
Gene ontology enrichment analysis
Groups of identified genes were checked for enrichment in gene ontology categories using the package GOEAST [24]. Significance was reported as Q values (FDR-corrected P values [36]).
Identification of gene co-expression modules associated with behavioral phenotypes
We used the R package WGCNA [47] to create gene co-expression modules. The input data consisted of gene expression data from the hippocampus (n = 94) and the striatum (n = 94). This program created modules, or clusters of highly correlated genes, in each tissue separately. For each of the modules, the program produced a module eigengene (ME), which enabled us to relate modules to behavioral phenotypes.
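A minimal sketch of the module construction step with WGCNA; the parameter values shown are common defaults and illustrative assumptions rather than the settings used in this study:

```r
library(WGCNA)

# hippocampusExpr: strains x probes expression matrix (placeholder name).
net <- blockwiseModules(hippocampusExpr,
                        power = 6,             # soft-thresholding power
                        TOMType = "unsigned",  # topological overlap flavor
                        minModuleSize = 30,
                        mergeCutHeight = 0.25)

moduleColors <- net$colors  # module label per gene ("grey" = unassigned)
MEs <- net$MEs              # one eigengene per module
```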
Module preservation
We used the modulePreservation function from the WGCNA library to calculate module preservation statistics [31]. The Zsummary is derived from seven underlying statistics that measure preservation of various aspects of within-module network density and connectivity patterns. The underlying preservation statistics are based on permutation tests and their values represent evidence that a module is significantly better preserved between the reference and test networks than a randomly sampled group of genes of the same size. A Zsummary < 2 indicates no evidence of module preservation, 2 < Zsummary < 10 indicates weak to moderate module preservation, and Zsummary > 10 indicates strong preservation.
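A sketch of the preservation calculation; the output indexing follows the WGCNA package's naming conventions, and the expression objects are hypothetical placeholders:

```r
library(WGCNA)

multiExpr <- list(hippocampus = list(data = hippocampusExpr),
                  striatum    = list(data = striatumExpr))
multiColor <- list(hippocampus = moduleColors)

mp <- modulePreservation(multiExpr, multiColor,
                         referenceNetworks = 1,  # hippocampus as reference
                         nPermutations = 200)    # permutation count assumed

# Zsummary.pres per hippocampus module, evaluated in striatum.
Z <- mp$preservation$Z$ref.hippocampus$inColumnsAlsoPresentIn.striatum
Z$Zsummary.pres
```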
Network edge orienting
Markers surpassing an FDR threshold of 10% in the behavioral QTL analysis, along with gene expression data for hippocampus and striatum, were used as input to the Network Edge Orienting (NEO) software package in R [42]. We selected marker, gene, and phenotype combinations that yielded a LEO.NB.AtoB score > 0.3 and an RMSEA.AtoB score < 0.05 for further analysis.

Additional material

Additional file 1: Supplementary methods, tables, and figures. Table S1, Classification of quantified behavioral phenotypes; Table S2, Top 100 cis eQTLs in hippocampus; Table S3, Top 100 cis eQTLs in striatum; Table S4, Gene co-expression modules; Table S5, Functional classification for genes in context fear module 1; Table S6, Functional classification for genes in context fear module 2. Supplementary figures are Figure S1, Cluster dendrogram by behavioral phenotype across HMDP; Figure S2, Mapped locus for cue immobility on chromosome 7; Figure S3, QTL plots for 48 tested behavioral phenotypes after EMMA correction for population structure; Figure S4, Hippocampus eQTLs; Figure S5, Striatum eQTLs; Figure S6, Hippocampus module-trait correlations; Figure S7, Striatum module-trait correlations; Figure S8, Striatum NEO results.
Additional file 2: Gene connectivity and module information. Table provides details of gene co-expression network analyses for each tissue and corresponding measures of intramodular connectivity for each gene.
The effects of mechanical force on fibroblast behavior in cutaneous injury
Wound healing results in the formation of scar tissue, which can be associated with functional impairment, psychological stress, and significant socioeconomic cost exceeding 20 billion dollars annually in the United States alone. Pathologic scarring is often associated with exaggerated action of fibroblasts and subsequent excessive accumulation of extracellular matrix proteins, which results in fibrotic thickening of the dermis. In skin wounds, fibroblasts transition to myofibroblasts which contract the wound and contribute to remodeling of the extracellular matrix. Mechanical stress on wounds has long been clinically observed to result in increased pathologic scar formation, and studies over the past decade have begun to uncover the cellular mechanisms that underlie this phenomenon. In this article, we will review the investigations which have identified proteins involved in mechano-sensing, such as focal adhesion kinase, as well as other important pathway components that relay the transcriptional effects of mechanical forces, such as RhoA/ROCK, the Hippo pathway, YAP/TAZ, and Piezo1. Additionally, we will discuss findings in animal models which show that inhibition of these pathways promotes wound healing, reduces contracture, mitigates scar formation, and restores normal extracellular matrix architecture. Recent advances in single cell RNA sequencing and spatial transcriptomics, and the resulting ability to further characterize mechanoresponsive fibroblast subpopulations and the genes that define them, will be summarized. Given the importance of mechanical signaling in scar formation, several clinical treatments focused on reducing tension on the wound have been developed and are described here. Finally, we will look toward future research which may reveal novel cellular pathways and deepen our understanding of the pathogenesis of pathologic scarring. The past decade of scientific inquiry has drawn many lines connecting these cellular mechanisms that may lead to a map for the development of translational treatments for patients on the path to scarless healing.
Introduction
Pathologic skin scarring is a type of fibroproliferative disorder not dissimilar from the fibrotic response that occurs following injury in many other tissue types. This fibrotic response can be protective for tissues, as a means of re-establishing integrity expeditiously following injury. However, efficiency is frequently traded for full restoration of tissue function in fibrotic healing. Though scarring of the skin is a well-recognized and visible example of this phenomenon, fibrotic response to injury exists throughout the human body (1). Idiopathic pulmonary fibrosis in the lungs, hepatic cirrhosis in the liver, and the response in myocardial tissue to ischemic damage are all examples of this process (1)(2)(3)(4). Taken together, fibroproliferative disease accounts for approximately 50% of deaths in the United States, and the economic cost is estimated to be in the range of tens of billions of dollars (5). Socioeconomically, there is substantial cost of these diseases in terms of disability and impact on quality of life (1).
In the skin, scarring may occur in response to any deep cutaneous injury including trauma, burns, or iatrogenic causes including surgery or radiation therapy. In the developed world, scar-related pathology affects approximately 100 million people (6)(7)(8). Scars can result in functional limitations (i.e., contracture), cosmetic concerns, and affect quality of life by causing pain, pruritus, and psychological distress (6,9).
While superficial wounds may heal without significant scar formation, deep cutaneous injury frequently results in permanent, disfiguring scarring and may cause more significant problems, such as hypertrophic scars or keloids. These two types of pathologic scarring represent examples of abnormal, pathologic wound healing. Pathologic scarring is typically understood as "over-healing," an overcorrection caused by an exaggerated response of normal wound healing pathways (6,9,10). To better define these terms and understand how over-healing occurs, we must first establish the four stages of normal wound healing.
Normal wound healing
The wound healing process is coordinated by interactions between several cell types, systems, and pathways in the body, all with the goal of re-establishing the skin barrier. Though complex, the process is quite precise and controlled in healthy humans. In the process of normal wound healing, four phases have been identified: hemostasis, inflammation, proliferation, and remodeling. These phases interact and overlap substantially, influencing each other and the entire healing process (11,12).
Hemostasis, the first phase of the wound healing process, begins immediately after the wound is incurred. The goal of this stage is to attenuate blood loss. Blood flow is slowed through constriction of the blood vessels and, with the activation of both intrinsic and extrinsic coagulation pathways, ultimately stopped by aggregation of platelets and the formation of a fibrin clot (11). The fibrin clot and surrounding wound tissue will secrete proinflammatory cytokines and several growth factors, including transforming growth factor beta 1, 2, and 3 (TGF-β1, TGF-β2, TGF-β3), platelet-derived growth factor types A, B, C, and D (PDGF-A, PDGF-B, PDGF-C, and PDGF-D), fibroblast growth factor subtypes 2 and 7 (FGF-2 and FGF-7), and epidermal growth factor (EGF), to attract and promote proliferation of immune cells (12,13).
Inflammation, the second phase, begins once bleeding has ceased. The blood vessels dilate to allow an influx of immune cells, such as M1 macrophages, neutrophils, and lymphocytes into the wound. These cells adhere to the fibrin clot and recruit more inflammatory cells. Neutrophils are responsible for decontaminating the wound by phagocytizing any bacteria and cellular debris found in the wound and releasing reactive oxygen species, producing cytotoxic granules, and placing neutrophil extracellular traps (NETs) (11). Neutrophils additionally amplify inflammation, induce fibroblast proliferation, and direct the adaptive immune response (14). Lymphocytes, like T and B cells, are responsible for fighting off any possible infections and regulating the overall immune response, though the activity of these cells continues into the late proliferation/early remodeling phase (12). Macrophages are involved in many interactions throughout the wound healing process. Initially, pro-inflammatory M1 macrophages are responsible for releasing cytokines that recruit and activate additional leukocytes. Anti-inflammatory M2 macrophages later induce apoptosis in cells that are no longer needed, including other immune cells, allowing for the resolution of the inflammatory phase. As these apoptotic cells are removed, macrophages promote the transition to the proliferation phase by recruiting fibroblasts, keratinocytes, and endothelial cells to begin tissue regeneration (1,12,14).
Proliferation, the third phase, is a continuous process that overlaps with the inflammatory phase. The goal of this phase is re-epithelialization of the wounded tissue (11). With respect to the dermis, fibroblasts and endothelial cells are the primary cell types involved in this process. These cells promote formation of collagen, granulation tissue, and angiogenesis (12). Fibroblasts are responsible for collagen synthesis, as well as the production of proteoglycans and glycosaminoglycans, major components of the extracellular matrix that help to stabilize and form a scaffold for the healing tissue (11,12). Once this foundation has been created, epithelial cells from the wound periphery migrate inwards and layer over the healing wound bed. Concurrently vasculogenesis begins. As the wound begins to mature, collagen fibers are laid down by fibroblasts (11).
Remodeling, the final phase, typically takes months to a year but can continue for years after the initial wounding. The goal of this phase is to return the architecture of the tissue to its unwounded state (12). The new extracellular matrix that was created in the proliferation phase is altered during remodeling. Many of the capillaries formed to deliver blood and cells to newly generated tissue regress and the density of blood vessels decreases to a normal state (11). The collagen that was laid down by fibroblasts is re-organized to better strengthen the healing tissue. Myofibroblasts, or contractile fibroblasts, contract the periphery of the wounded area to reduce the size of the wound (1,12,15). Ultimately, the once functional tissue is transformed into a scar composed mainly of fibroblasts and a collagenous extracellular matrix. A fully healed scar will never be as strong as the original tissue, but can recover up to 80% of the tensile strength of the original (11,15). Remodeling will continue after this scar has formed, organizing and degrading excess collagen in an attempt to return to its original unwounded state (11,15).
Excessive wound healing and pathologic scarring
The typical phases of wound healing often result in effective resolution of superficial damage to cutaneous tissue. However, when wounds involve the dermis, scarring may occur. Two primary types of pathologic scarring are recognized: hypertrophic scars and keloids. Though these two processes differ in pathogenesis, characteristic features, histological morphology, and clinical treatment, distinguishing between them clinically remains challenging.
Hypertrophic scarring
Hypertrophic scars are clinically defined as erythematous, raised, and often pruritic lesions that are contained within the area of the causative wound. Classically, they appear adjacent to joints in areas of skin that are frequently subjected to tensile force (16,17). The natural history of a hypertrophic scar is rapid growth for the first 4-12 weeks following injury, with flattening and regression during the remodeling phase.
Keloids
Keloids may result from diverse types of injury to the skin, including but not limited to perforation, laceration, scratches, insect bites, acne, burns, and iatrogenic injuries from surgery. They appear most frequently in areas under tension, such as the neck, chest, shoulders, upper back, and abdomen. Unlike the contained nature of hypertrophic scars, keloids extend beyond the site of the initial injury and involve adjacent tissue (18). This fibroproliferative pathology may occur up to years following an initial insult to the tissue and expands at a slower rate than hypertrophic scars. While hypertrophic scars regress, keloids almost always continue to expand without regression and have been likened to benign tumors of the skin (16,18). Genetic and epigenetic factors appear to play a major role in the pathogenesis of keloids, as they are more common in those with African and Asian ancestry and often run in families (19,20).
Due to the considerable morbidity, patient well-being, and economic toll incurred by pathologic scarring, significant interest in developing effective clinical therapies exists. While early research has focused on elucidating and targeting the biochemical mechanisms contributing to pathologic scarring, recent interest has centered around the relationship between mechanical forces and scarring (5).
Mechanical stimulation influences wound healing
The association between scar formation and mechanical forces was first observed nearly 200 years ago by Guillaume Dupuytren, the French anatomist and surgeon who, in 1834, observed that puncture wounds through the skin produce an elliptical opening. In 1861, Karl Langer, an Austrian anatomist, utilized this technique in conjunction with topographical skin lines to develop the first set of guidelines to dictate the ideal orientation of surgical incisions (Figure 1) (21). Since that time, over thirty-six guidelines attempting to improve upon Langer's work have been developed, using concepts such as the orientation of underlying muscles and the development of wrinkles in the skin, all with the same goal: to decrease scarring through minimizing anatomical tension across the wound (22).
Recent studies have demonstrated that regional differences in stiffness of the overlying skin may correlate with propensity for scar formation, indicating the existence of anatomically disparate "scar zones" (23). Lack of mechanical load has also been implicated as one contributing factor to the scarless healing phenomenon observed in certain tissue types, such as fetal wounds and oral mucosa (24, 25).
These scar zones may also explain the propensity of keloids to appear at anatomical sites which are subject to increased stiffness and mechanical force, such as the chest and upper back (26, 27). The characteristic shapes of keloids, typically described as a "butterfly," "crab's claw," or "dumbbell," also provide an early suggestion that mechanical force contributes meaningfully to the pathogenesis of this fibroproliferative disorder (28). Computer visual analysis of keloids has since confirmed that the highest tension indeed exists at the hyper-proliferative edges of keloids, rather than the less proliferative center (26).

Figure 1. An example of Langer lines, the approximations of tension directionality across the skin surface. These lines have been recapitulated many times and are used to direct the ideal directionality of incisions to minimize tension across a wound, one of the earliest historical responses to the observation that increasing tension aggravates the scarring process.

Gurtner et al. demonstrated that cutaneous scar formation could be dictated by controlling the mechanical environment of the wound. In a porcine model, wounds were created and allowed to heal under applied tension, normal tension, and tension off-loading. Tension off-loading resulted in a histologic scar area that was reduced by 6-fold compared to the control state, and 9-fold compared to the applied-tension state (29). This study, amongst others, has confirmed in a quantitative manner the qualitative observations regarding scar behavior under tension that have long been made clinically and anecdotally.
Model systems
As interest has grown in studying the biochemical pathways that underlie transduction of mechanical signals and produce the clinically observed exacerbation of scarring by mechanical force, a number of in vitro and in vivo model systems have been developed. Both of these model systems provide important, though different, lenses through which to examine fibroblast behavior in a mechanically dynamic environment. While in vitro systems allow for the examination of single cell types, and even single cells, at a given time and under highly regulated conditions, in vivo systems allow for the study of fibroblasts within the complex and interconnected system of a living organism, which more closely mirrors a clinical environment.
In vitro model systems
Fibroblasts, long known to be responsible for fibrotic deposition of excessive extracellular matrix (ECM) in the healing process, were identified early on to adopt a fibroproliferative phenotype following mechanical stimulation. For this reason, most in vitro models of mechanical stimulation focus on this cell type.
Just as our understanding of fibroblasts and their behavior in response to mechanical force has expanded significantly in the last century, so too has our methodology in studying it. Early attempts to do so in vitro involved primitive "hanging drop" culture methods which created mechanical stimulation by establishing tissue cultures in a droplet and subsequently stretching the culture over a silicone rod during the growth period (30,31). Since then, iterative attempts to improve upon in vitro model systems to be more representative of the biological environment of the fibroblast have been proposed and implemented.
Several in vitro models have been produced and studied which include cell types other than fibroblasts, such as Langerhans cells, melanocytes, endothelial cells, and others (32)(33)(34). While the goal of these studies has frequently been to create a cellular environment more similar to that of in vivo skin, a growing body of literature has come to identify that mechanical force results in morphological and biochemical changes to many non-fibroblast cell types as well (34, 35).
The importance of the ECM and the physical framework in which cells exist were recognized early in the study of mechanical force on fibroblast behavior. Developmental biologist Paul Weiss created experiments utilizing fibroblasts cultured in plasma clot and applied tension to the fibrin to examine resulting morphological changes in the fibroblasts (36). The discovery that fibroblasts could be studied in acid-solubilized collagen gave rise to the movement from two-dimensional to three-dimensional culture environments (37). Fibroblast-populated collagen lattices (FPCLs), which were initially developed to treat burns, have gained popularity as a model to study fibroblast integrin-mediated interactions with ECM substrate and associated paracrine biochemical signaling in three dimensions (5). These lattices are seeded with fibroblasts and used to create either "free-floating" models, where the cells and their matrix float without adhesion to the surrounding experimental structures, or "rigid" models, where the matrix and fibroblasts are cast onto a fixed surface (38). Recent efforts to improve these models have focused on modulation of matrix porosity, stiffness, and adhesion domains to accurately represent the native fibroblast environment (37,39,40).
Computer-automated servohydraulic or vacuum-type stretch apparatuses allow for uniform frequency and amplitude of tension application to culture materials (38, 41-43). These apparatuses allow for specific modulation of parameters such as strain or compression magnitude, orientation, and kinetics (38). Traction is transduced by these apparatuses onto rings or two opposing bars which surround the matrix to which the fibroblasts are adhered (44-46). To apply mechanical force to individual cells, microneedles have been mounted on these apparatuses and inserted into the culture substrate in close proximity to fibroblasts (47, 48). "Optical tweezers," which consist of coherent light beams, have been used to trap, manipulate, and apply force to cells in a non-contact manner; these protocols have allowed force to be applied to individual cells and mechanotransduction to be studied at the unicellular level (49-51). The development of atomic force microscopy has allowed for high-resolution, fluorescent live visualization of cells as they are stretched and undergo mechanical force (52-54).
In vivo model systems
While in vitro systems have formed the basis for a substantial portion of the scientific literature examining the role of mechanical force in fibroblast behavior, they are limited in their ability to replicate the three-dimensional tension environments and biochemical crosstalk found in living tissue. In vivo models allow for a more holistic view of fibroblast activity within a dynamic tissue environment and represent a critical step in translating basic science findings into clinically applicable treatments.
Small animal models are a common first step for in vivo studies of skin disease and offer an entry-level opportunity to study fibroblast activity in mammalian skin. Although mouse skin lies more loosely than human skin and therefore scars less, it is genetically dissectible. To create a better analogue to human scarring in a mouse model, Aarabi et al. developed a murine model of hypertrophic scarring that utilizes biomechanical loading devices placed on opposite sides of a wound and distracted incrementally to apply continuous tension across the wound throughout the healing process. The scars resulting from this protocol were found to be histologically comparable to human hypertrophic scars (55). Similarly, Chin et al. developed a model using a computer-controlled device that cyclically distracts skin in a murine model to produce the effect of repetitive mechanical force on an area (56).
Porcine models provide several advantages over small animal models, including higher anatomical fidelity to human skin than murine models. Additionally, porcine models provide scale and mechanical forces across the skin more similar to those found in humans. A hypertrophic scar model in pigs has been developed in which full-thickness, elliptical, and hexagonal excisions are created and put under tension by a load-bearing polymer device. This model produces hypertrophic cutaneous scars with histological morphology comparable to those found in human skin (57). Advances in these model systems have allowed great progress to be made in our understanding of the role mechanical forces play in fibroblast behavior and the wound healing process.
Fibroblasts and myofibroblasts respond to and transduce tension
Dermal fibroblasts have long been implicated as the cells that respond to mechanical force and effect the clinical changes observed in scars under tension (5). These highly mechanosensitive cells respond to mechanical strain by increasing expression of proinflammatory and profibrotic genes, proliferating, migrating, and differentiating into myofibroblasts (5, 41, 58-60). While many pathways that contribute to fibroblast mechanotransduction in the skin are still being discovered and described, the past two decades have yielded an increasingly deep understanding of the cellular structures, genes, and signaling pathways by which fibroblasts respond to mechanical force and alter the wound environment (28).
In addition to responding to external mechanical force, fibroblasts have been shown to create mechanical forces within the wound to contract and assist with closure during the healing process. These mechanical forces are termed cell traction forces (CTFs) and are generated by the cytoskeleton of the cell (61, 62). This process is thought to underlie the so-called "purse-string" healing model, mediated in adults by myofibroblasts' formation of late-stage granulation tissue, which imposes contractile forces to pull the wound closed. Fibroblasts and myofibroblasts impose this intracellular tension through the sliding of ATP-powered actin-myosin filaments, which is then transmitted to the ECM through focal adhesions at either end of stress fibers. Actin polymerization at the leading edge of a moving cell serves as another means of CTF generation (63). The cellular motion associated with the creation of CTFs is relatively slow, sustained, and irreversible when compared to the calcium-regulated contraction of myocytes (64). A number of associated molecules regulate this process, including myosin light chain kinase (MLCK), MLC phosphatase, Rho, Rac, and α-SMA. It is through the generation of CTFs that fibroblasts and myofibroblasts are able to migrate within the ECM, generate stress and strain within tissue, maintain cellular tensional homeostasis, and ultimately draw wound edges together (64).
In normal wound healing, myofibroblasts facilitate contraction of the granulation tissue and then undergo normal apoptotic cell death. Normal wound healing concludes when the wound bed is re-epithelialized by keratinocytes which migrate to the wound edges via lamellipodia (65, 66). However, in cases of pathologic scarring, prolonged myofibroblast survival contributes to the aberrant fibrotic pathology (67-69). In fact, mechanical load has been shown to result in a four-fold decrease in myofibroblast apoptosis, which in turn results in a hypertrophic scar phenotype. This mechanism, in part, is thought to explain the cellular changes that underlie the pathogenesis of pathologic scarring (55).
Mechanotransduction and biotensegrity
Although the concept of mechanical tension on wounds exacerbating scar formation has long been acknowledged, only recently have scientific studies allowed the cellular and physiological causes underlying this phenomenon to be uncovered. As this field of inquiry has emerged, the term "mechanotransduction" has been coined, referring to the methods by which mechanical forces are translated into biochemical signals (5).
In the framework of mechanotransduction, "mechanosensing cells" or "sensor cells" have the ability to recognize mechanical cues from the environment such as force, stress, strain, rigidity, and adhesiveness. Conversely, "mechanotransducing cells" or "effector cells" contain proteins or protein complexes that can produce or potentiate a chemical signal in response to mechanical stimulation (70).
Organs, tissues, cells, and sub-cellular components are understood to exist within a stabilizing and compressing framework which has broadly become known as the biotensegrity system. Derived from the architectural term "tensegrity," biotensegrity refers to biological systems that are stabilized by both continuous tension and discontinuous compression. From the tissue level down to the sub-cellular level, this framework allows mechanical forces to be relayed and therefore acts as an important mechanism in mechanotransduction (71). This framework is embodied by the ECM at the tissue level, the cytoplasm at the cellular level, and the karyoskeleton within the nucleus (Figure 2) (71, 72).
These concepts of mechanotransduction and biotensegrity work in tandem to allow mechanical action to propagate to the cellular and subcellular level. While mechanical forces on the skin can originate at the tissue level, such as stretch initiated by muscles, these forces can also be mediated at the cellular level through cellular adhesions and cytoskeletal connections. Regardless of origin, external mechanical signals can be sensed by transmembrane cell surface adhesion receptors such as integrins, cadherins, or mechanosensitive ion channels, which allow for transfer of mechanical stimulation across the cell plasma membrane. These mechanical forces are then translated into intra-cytoplasmic biochemical signals by various means, including focal adhesion complexes and intercellular adhesion complexes. Biochemical signals mediate downstream signaling cascades and transcriptional effects in the nucleus. Therefore, the process of mechanotransduction can cause up- or down-regulation of second messengers and subsequently affect cell processes such as migration, growth, proliferation, and matrix remodeling (73). Signaling from mechanosensitive cells to mechanoresponsive cells results in the activation, suppression, and modulation of pathways key to tissue-level processes such as wound healing (74).

[Figure 2: Various levels of biotensegrity, the structural principle which demonstrates how tissue, extracellular space, cell cytoplasm, and the nucleus are stabilized by continuous tension with discontinuous compression. In descending order: skin organ-level architecture, followed by the tissue-level dermal matrix, the cellular-level cytoskeleton, and finally the nucleus-level karyoskeleton. It is through these structural systems that mechanical forces are relayed to the cellular and nuclear level, one of the pathways through which mechanotransduction is thought to be facilitated.]

Mechanotransduction is understood to occur at a spatial distance when an initial force acts across specific cytoskeletal filaments, and the communication of this stimulation is known to depend on the stiffness differential between cellular structural components. Therefore, cytoskeletal prestress may in part determine the pace and fidelity of the intracellular response to external mechanical signals. As these forces are transduced to the nuclear level, changes in the shape and kinetics of load-bearing molecules can result in epigenetic, transcriptional, and protein processing changes (75). These downstream effects of mechanical stimulation are highly relevant to cellular behavior, allowing mechanotransduction to control cellular physiology and coordinate tissue-level impact (74, 76).
Modern developments in basic science techniques and computational power have allowed further characterization of fibroblast subpopulations, yielding important insights into the behavior of fibroblasts in the cutaneous wound environment. Over the past several years, interest in fibroblast heterogeneity and the characterization of fibroblast subpopulations has grown. Foster et al. published a 2021 article detailing a multimodal-omics approach to studying the fibroblast response to wound healing, using methods such as transgenic rainbow mouse lineage tracing, bulk transcriptomic analysis, and single-cell RNA transcriptomic analysis. This study found that local fibroblasts proliferate in a linear, polyclonal manner along the cross-sectional wound interface (77). Foster et al. demonstrated that local fibroblasts migrate to the wound site following injury, with specific functional subpopulations involved in different processes. In particular, a mechanofibrotic subpopulation of fibroblasts exists on the outskirts of the wound, characterized by markers including Engrailed-1, COL1A1, TGFβ-2, and JUN. One week after the initial wound, these mechanofibrotic fibroblasts begin to proliferate in response to mechanical force and migrate to the center of the wound. By two weeks, fibroblasts maintained a fibrotic state in the scar microenvironment, sustained by inflammatory signaling despite closure and epithelialization of the wound.
Cellular mechanisms of mechanotransduction

Extracellular mechanisms and integrins
Cells exist within a protein-rich scaffold known as the ECM, which provides structural support and mediates connections between cells. The ECM plays important roles in the wound healing process, including modulation of biochemical signaling pathways and regulation of the proliferation, migration, and survival of cells existing within it (78). While the ECM is made up of over 300 protein and polysaccharide types, the most frequently found are collagen and elastin, which are morphologically string-like and attach to cells via transmembrane heterodimeric receptors known as integrins (79, 80). These proteins are fundamental to transmission of signals from outside of the cell inwards and vice versa.
Integrins connect the ECM to the intracellular actin cytoskeleton and are important components of focal adhesions, which unsurprisingly have long been an area of interest for researchers interested in mechanotransduction (5). Focal adhesions are composite structures of several proteins which attach the intracellular cytoskeleton to the extracellular matrix. When a mechanical force acts upon the extracellular matrix, these complexes mediate the propagation of the mechanical signal into downstream biochemical pathways. These biochemical pathways ultimately result in the migration of inflammatory cells and keratinocytes, initiation of angiogenesis, and increase in collagen synthesis that causes scarring (81).
Transforming growth factor Beta (TGF-β)
Transforming growth factor β (TGF-β), a multifunctional growth factor and one of the best-described pro-fibrotic cytokines, works through multiple pathways to translate mechanical stimulation into fibrotic pathology. There are three primary isoforms of TGF-β: TGF-β1, TGF-β2, and TGF-β3, all of which signal through a heteromeric receptor complex of type I and type II receptor serine/threonine kinases (82). TGF-β first binds the TGF-β receptor 2 (TGF-βR2) before the signal is propagated downstream through Smad family proteins, which can be categorized into receptor-regulated Smads or R-Smads (Smads 1, 2, 3, 5, and 8), common-partner Smads or Co-Smads (Smad 4), and inhibitory Smads or I-Smads (Smads 6 and 7). R-Smads form a heterodimeric complex with Co-Smads before translocating to the nucleus to act as a transcription factor. Classic transcriptional targets of the TGF-β pathway have pro-fibrotic effects, such as the induction of excessive collagen production and the initiation of fibroblast transition to myofibroblasts (83-85).
TGF-β plays a central role in fibroblast mechanotransduction pathways. In the fibroblasts of hypertrophic scars, autocrine production and activation of TGF-β results in the development and stabilization of large focal adhesions and upregulates myofibroblast contractility, both of which are thought to contribute to the excessive wound contraction seen in this pathology (86, 87). TGF-β has also been shown to upregulate the fibroblast contractile markers α-smooth muscle actin (α-SMA), cofilin, and profilin in myofibroblasts in a dose-dependent manner (88, 89). TGF-β is released and activated from its reservoir in the ECM when integrins are stimulated by mechanical force. In addition, Wipff et al. demonstrated that myofibroblast contraction itself can activate TGF-β from the ECM reservoir. Therefore, both external mechanical force and the intrinsic contraction of myofibroblasts (which is itself stimulated by TGF-β) can result in the release and activation of TGF-β (90).
Focal adhesion kinase (FAK)
Focal adhesion kinase (FAK) was first recognized in 1992 as a non-receptor tyrosine-phosphorylated protein that localizes to focal adhesions, and it has quickly risen in interest as a component of mechanotransduction and a possible target for its inhibition. Several binding sites specific to focal adhesion proteins are present in the C-terminal domain of FAK, which associates with integrin clusters. Integrin-dependent autophosphorylation of FAK at the Tyr-397 site and others is thought to activate kinases in the Src family, which in turn initiate downstream signaling (47). FAK's role was first characterized in relation to cell motility, and in the early 2000s, in vitro studies began to explore the potential role of FAK in mechanotransduction. Wang et al. demonstrated that FAK-null fibroblasts showed an impaired response to mechanical input during migration (47). Other studies have reported that FAK phosphorylation followed by mitogen-activated protein kinase (MAPK) activation can be induced by uniaxial cyclic stretching of fibroblasts and results in fibroblast proliferation (41, 91). FAK has also been implicated in mechanotransduction pathways sensing shear stress in vascular endothelial cells (92, 93).
In 2011, Wong et al. reported that following cutaneous injury, FAK is activated in a pathway potentiated by mechanical stimulation. In a murine model of hypertrophic scar formation, this study showed that FAK-knockout mice formed scars with less inflammation and fibrosis than control mice (38). Additionally, this research established extracellular signal-regulated kinase (ERK) as an important mediator of FAK. When wounds are under tension, ERK mediates the excessive production of collagen and triggers the release of the chemokine monocyte chemoattractant protein-1 (MCP-1). MCP-1 knockout mice formed minimal scars, and small molecule inhibition of FAK in human cells also reduced scar formation and attenuated MCP-1 chemokine signaling in vivo (94). This research established the FAK-ERK-MCP-1 pathway as a key player in mechanotransduction and represented the beginning of the identification of specific biochemical targets for uncoupling mechanical force from biochemical stimulation of pathologic scarring.
In 2022, Chen et al. published research applying these findings in a porcine model of split-thickness skin grafting (STSG), a common intervention for deep tissue injuries that is also associated with contractures and scarring (95). Using single-cell RNA analysis, the group found that STSGs indeed cause upregulation of proinflammatory and mechanotransductive pathways, as would be expected in scar formation. A FAK inhibitor applied to this model was found to promote healing, reduce contracture, mitigate scar formation, restore collagen architecture, and improve graft biomechanical properties. Single-cell RNA analysis indicated that application of a FAK inhibitor up-regulated myeloid CXCL10-mediated anti-inflammatory effects and decreased CXCL14-mediated chemokine action and fibroblast migration. Mechanical force was found to increase fibroblast transcription of pro-fibrotic genes at later timepoints, and interruption of mechanical stimulation by FAK inhibition resulted in a shift toward the pro-regenerative fibroblast states that typically characterize unwounded skin (95).
RhoA and Rho-associated kinase (ROCK)
Perhaps the best-described target of FAK signaling, the Rho family of GTPases has been demonstrated to influence fibroblast behaviors such as tension, motility, intercellular adherence, cytoskeletal dynamics, and differentiation into myofibroblasts. In a murine model of cardiac injury, knockout of Rho-associated kinase-1 (ROCK-1) resulted in lower levels of myofibroblast transition in response to ischemia (96). Cyclic mechanical tension has been shown to activate RhoA and induce ROCK-dependent actin assembly, while the cytoskeleton relaxed with use of a ROCK inhibitor (59). Rho GTPases also appear to connect to effectors of the Hippo pathway, which has been strongly linked to cell proliferation, apoptosis, differentiation, and malignant transformation (97, 98).
Hippo pathway: yes-associated protein (YAP) and transcriptional coactivator with PDZ-binding motif (TAZ)

In 2011, Dupont et al. identified that the two main downstream effectors of the Hippo pathway, yes-associated protein (YAP) and transcriptional coactivator with PDZ-binding motif (TAZ), are involved in nuclear transduction of mechanical signals occurring in response to changes in ECM rigidity and shape (99). This pathway was found to be Rho GTPase-dependent and to require tension on the actomyosin cytoskeleton, but to occur independently of Hippo signaling. Additionally, this study implicated YAP and TAZ as important regulators of cellular differentiation, survival, and regeneration based on ECM stiffness (99, 100).
Lee et al. expanded on this finding within wound healing in a 2014 murine model of cutaneous scarring. YAP was found to localize to the nucleus of dermal cells at 2 and 7 days following wounding. TAZ, which normally localizes to the cytoplasm of dermal cells, localized to the nucleus as well one day following wounding. In YAP/TAZ knockdowns, the rate of wound closure was markedly slowed and TGF-β expression was reduced. Additionally, Mascharak et al. demonstrated that YAP inhibition alters fibroblast behavior, preventing adoption of a more profibrotic phenotype and resulting in regeneration of skin following wounding as opposed to scar formation (101). The mechanosensitive proteins YAP and TAZ have been found to modulate many molecules important for the development of fibroproliferative diseases such as scarring, including proteins in the TGF-β signaling pathway, such as Smad-2, Smad-7, and p21, as well as connective tissue growth factor (CTGF) and transglutaminase-2 (99, 102). These results support that the Hippo pathway, and particularly YAP and TAZ, is essential for dermal wound healing (103).
YAP inhibition has been shown to block activation of Engrailed-1 expression, leaving fibroblasts in a regenerative state that promotes wound regeneration (104). Multiomic analyses have further elucidated these divergent molecular trajectories, showing that inhibition of YAP mechanotransduction drives fibroblast-mediated regenerative repair through TRPS1 and Wnt activation (101).
Wnt-β-Catenin
Notably, connections have been drawn between the Hippo pathway and components of the canonical Wnt-β-catenin pathway (105). A 2002 study by Cheon et al. first linked β-catenin to fibroblast activity in cutaneous wounds, showing that this core component of the cadherin protein complex regulates fibroblast proliferation, motility, and invasiveness in cutaneous wounds. Transgenic mice with elevated β-catenin activity developed hyperplastic scars following cutaneous wounding (106). Since then, several studies have shown that β-catenin is mechanoresponsive and mediates myofibroblast differentiation, though the significance of this pathway to cutaneous wound healing has yet to be fully elucidated (107, 108).
Calcium ion channel pathways
Calcium channels are one of several ion channel types that maintain an electrical and chemical gradient across the plasma membrane of cells. The creation of these gradients and the subsequent movement of ions across them can result in second messenger activation and a variety of downstream effects. Because certain types of calcium ion channels can be activated by mechanical force, they provide a means of mechanotransduction from the extracellular environment to the intracellular space (109). This has been shown experimentally in fibroblasts, with intracellular calcium levels rising in response to the application of hydraulic pressure or stretch (48, 110). The action of these channels has been shown to be intimately related to the physical connections between integrins and the intracellular actin cytoskeleton, indicating important relationships between calcium homeostasis and cellular mechanical sensing (74).
One theory of the mechanism behind coordinated myofibroblast contraction facilitating wound closure postulates that signaling through adherens junctions to nearby cells may result in the opening of mechanosensitive ion channels and a Ca2+ influx (74, 111).
Piezo1
In 2010, Coste et al. identified the two proteins Piezo1 and Piezo2 as significant components of mechanically activated cation channels (112). Following this discovery, a group of subsequent studies sought to characterize the role of these proteins. Nourse and Pathak demonstrated in 2017 that Piezo1 interacts with the cell cytoskeleton, and in 2021 He et al. found that myofibroblasts in hypertrophic scar tissue overexpress Piezo1 (109). An in vitro model of mechanical stretch was shown to increase Piezo1 expression in human dermal fibroblasts and to increase calcium influx in a Piezo1-mediated manner. This model also demonstrated that Piezo1 activation promoted human dermal fibroblast proliferation and migration and altered the response to mechanical force; moreover, when the Piezo1 inhibitor GsMTx4 was injected intradermally into rats, they were protected from mechanical force-induced hypertrophic scar formation (Figure 3) (113).
Pathogenesis of keloids
As basic science techniques have continued to characterize the action of fibroblasts in the wound microenvironment, researchers have asked how these findings might apply to the case of keloids. Keloids are, as has been described above, one of the clearest clinical examples of tension affecting scar formation, and many have hypothesized that fibroblast mechanics might play a critical role in this process. Keloids have been characterized as a uniquely human example of pathologic scarring with various causal factors, though genetic factors appear to be the most influential. Several studies have examined the roles of apoptosis inhibition, nutritional factors, sebum secretion, chronic inflammation, and neurogenic inflammation as factors contributing to keloid pathogenesis (28).
Unsurprisingly, mechanical forces on fibroblasts have indeed been found to contribute significantly to keloid pathogenesis. Compared to fibroblasts from unwounded skin, keloid fibroblasts show increased stiffness and increased force generated by their actin filaments (114). These findings are hypothesized to contribute to the ability of keloid fibroblasts to migrate outside the original wound limits. A 2018 study by Hsu et al. identified that decreased expression of caveolin-1, a cellular membrane protein, could contribute to increased flexibility of the membrane and the resulting aberrant responsiveness of keloid fibroblasts to mechanical stimulation. Perhaps for this reason, keloid fibroblasts were found to produce excessive levels of pro-fibrotic cytokines and ECM when cultured on a mechanically stiff substrate. This action was mediated by nuclear translocation of Runx2, a transcription factor related to osteogenesis (115). Keloid fibroblasts have also been shown by Wang et al. to produce higher levels of TGF-β1, TGF-β2, and collagen 1a at both the transcriptional and translational levels when exposed to equiaxial strain. Additionally, keloid fibroblasts generate more focal adhesion complexes and demonstrate increased activation of FAK when exposed to mechanical stimulation (116).
The past two decades of research have expanded the body of knowledge related to keloid pathogenesis and the causes of pathologic scarring generally. As these studies have continued to implicate mechanical forces as an important mediator of these pathologies, translational clinical treatments have begun to emerge with this target in mind.
Current clinical options for treatment and prevention of pathologic scarring
The body of research detailed above describes how mechanical force on the wound contributes to the formation of scar tissue as wounds heal. With the goal of translating this scientific finding to clinical medicine, several clinical strategies that reduce or offload wound tension have been developed; these are reviewed below.
Botulinum toxin A
Derived from the bacterium Clostridium botulinum, botulinum toxin A provides neurotoxic effects which halt neuromuscular transmission and therefore decrease the tension applied to skin by underlying muscular movement. Because botulinum toxin A can paralyze muscle and thereby decrease tension on the edges of wounds, interest has arisen in its ability to treat pathologic scars that are known to be exacerbated by mechanical forces, such as keloids and hypertrophic scars (57). Indeed, studies have demonstrated that treatment of muscular structures surrounding a wound with botulinum toxin A decreases fibroblast proliferation and ultimately leads to decreased expression of TGF-β1 (117). Clinically, the application of botulinum toxin A to wounds and the surrounding area, typically via intralesional injection, has been reported to reduce scar formation. A recent meta-analysis demonstrated that keloids and hypertrophic scars treated with botulinum toxin A appeared visually less noticeable than those treated with corticosteroid or placebo (118). Prospective clinical studies have reported elevated levels of patient satisfaction with botulinum toxin A treatment, as well as decreases in pain, pruritus, tenderness, and scar volume (7, 118-120).
Silicone gel sheeting
External dressings placed on wounds during healing can help limit mechanical forces like stretching. Silicone gel sheeting, in particular, has been used since the 1980s in the clinical treatment of hypertrophic scars and keloids. Clinical studies have shown visible improvement in scars with silicone gel sheet application, and analytical software has demonstrated that this material facilitates the transfer of tension from the wound to the surrounding normal skin (15, 121, 122). However, some attribute these positive clinical outcomes to the silicone gel's ability to hydrate the stratum corneum and subsequently prevent fibroblast proliferation and collagen deposition (122, 123). Regardless of mechanism, silicone gel and other stabilization dressings remain a popular clinical prophylactic against scar formation, owing to consistently favorable performance in randomized controlled clinical trials and only mild side effects such as skin irritation (15, 124).

[Figure 3: Current understanding of cellular signaling pathways related to mechanotransduction in fibroblasts.]
Tapes
In the context of hypertrophic scarring, taping has been used clinically to aid in opposing the edges of a fresh wound during closure, but recent interest in the usage of tape to reduce tension across the incision during healing has risen. Types of tapes used clinically range in attributes, including non-stretch, paper, and elastic (high-stretch) varieties.
Non-stretch tape
Current literature has demonstrated the most support for non-stretch tapes, such as Blenderm™, in the prevention of scarring following wounds. A 2021 systematic review found that non-stretch tapes reduced the height, width, color, associated itching, and gross visual score of scars when implemented early in the treatment course (125). When implemented at later timepoints, non-stretch tapes were found to have a high level of evidence for improving the thickness, pliability, softness, and color of scars (125, 126). Non-stretch tapes have also been implemented, with mixed results, in the treatment of burns and for scar hydration (125, 127).
Paper tape
Paper tapes, such as Steri-Strips™ and Micropore™, have long been used clinically, particularly in the early stages of wound management. Indeed, studies suggest that these materials prevent hypertrophic scarring when implemented in early treatment, though some studies suggest that hypertrophic scarring or scar stretching may occur after the removal of paper tape 12 weeks following injury (128, 129). One paper found that use of paper tape reduced the pain, itch, thickness, and elevation of the scar 12 months following the initial wound. Generally, paper tapes have been found to be less effective when applied during the remodeling phase of wound healing or on mature scars, though they have been reported to produce subjective improvements in scar coloration, thickness, and elasticity (125, 126).
Elastic tape
Elastic-type tapes such as Kinesio tape are infrequently used in the acute treatment setting, but have been reported in a single case study to improve the color, pliability, and elasticity of hypertrophic scars in late treatment (125). Additionally, 70% of patients reported improved satisfaction with the appearance of their previously untreated mature scars when elastic tape was applied without stretch (130, 131).
Oyster splints
Keloids classically appear on the lobes of the ears following trauma to the area, creating a functionally and aesthetically undesirable result. The oyster splint was developed in 1983 to treat keloids of the ear by applying compression to the area after surgical correction of the keloid (132). Because the ear's topographical irregularities and anatomical heterogeneity make compression difficult to apply, the oyster splint requires a mold of the area to be made; compressive pressure is then applied through a splint in the unique shape of the patient's wound and ear. The oyster splint reduces tissue metabolism and fibroblast proliferation, likely related to pressure-offloading from the wound margins (133). Case reports have suggested that this technique results in favorable functional and aesthetic clinical outcomes compared to scar revision surgery alone (132, 133).
Embrace device
Designed with the specific goal of offloading tension from a wound and thereby improving aesthetic and functional outcomes following scar revision surgery, the Embrace device is a pre-strained silicone elastomeric dressing adhered to the skin with a pressure-sensitive silicone adhesive. A clinical study in which this device was applied to one half of a scar revision incision in 10 patients found a highly significant improvement in scar appearance with treatment, suggesting promise for such technologies in treating pathologic scarring (134). A second randomized controlled trial treating one half of an abdominoplasty closure with the Embrace device and the other half with the surgeon's standard of care also found significantly reduced scarring (135). Future opportunities for this device may include customizing its usage and design for specific anatomical areas and incisions.
Incisional negative pressure wound therapy (iNPWT)
While iNPWT has previously been used to prevent infection and dehiscence, a 2022 study investigated its effect on scarring. The hypothesis behind this work was that iNPWT may offload lateral tension across the incision, ultimately decreasing the mechanical pulling forces that contribute to hypertrophic scar formation. In incisions following gender-affirming mastectomies, one randomized side was treated with iNPWT while the other was treated with Steri-Strips™. While objective quantitative results were not remarkably different between the two groups, iNPWT resulted in improved patient satisfaction on the SCAR-Q and the POSAS observer scale (136).
Sutures
To reduce the risk of pathologic scar formation, intentional use of suture types and placement that reduce tension across a surgical wound is preferable. Specifically, attention is often paid to tension across the wound dermis, given evidence of the importance of this layer in the formation of keloids and hypertrophic scars (6). This logic, in part, has helped popularize the technique of subcutaneous tensile reduction sutures, which intentionally displace tension from the dermis to the deep and superficial fascial layers (5).
Scar revision techniques to reduce tension
When mature scars cause functional or aesthetic concerns for a patient, the traditional treatment is surgical scar revision. Importantly, the remodeling phase of wound healing can continue for one or more years after the initial injury, and surgical intervention should therefore be considered only after this timepoint, as scars may regress naturally. The goal of these procedures is to alleviate functional limitations caused by the scar tissue or to improve its visual appearance, either by removing scar tissue or by utilizing anatomical geometries to reframe the scar and decrease its noticeability. Surgical intervention is effective for treating hypertrophic scars, which recur at low rates, but should be used conservatively in the treatment of keloids, which recur at rates estimated at 45%-100% (137-139). A critical pillar of scar revision is tension-free closure, which can be facilitated by surgical technique (6, 140). The z-plasty technique, heralded for its ability to lengthen contracted scars and align them more smoothly with skin tension lines, has been popularized in the setting of scar revision (5, 141, 142). In some cases, local flaps can be utilized to decrease tension on the wound and ultimately assist in resolving pathologic scarring (143).
Conclusion
Scarring of the skin poses a complicated clinical problem with significant implications for patients' functional and psychological outcomes and with a large socioeconomic impact. It has long been recognized that tension and force across a wound affect scar formation, particularly in the context of pathologic scarring. Therefore, scientific endeavors have long attempted to better characterize the process of mechanotransduction and how mechanical force is translated into changes in cellular biochemical signaling in cutaneous wounds. Using a variety of inventive in vitro and in vivo models, scientists have demonstrated the importance of the fibroblast as a critical cell in mechanosensitivity and mechanoresponsiveness. Several proteins, transcriptional cofactors, and signaling pathways have been characterized in connection with mechanotransduction in cutaneous injury, and novel technologies in cellular biology have allowed a more granular understanding of these interactions and of fibroblast behavior in response to mechanical stress than ever before.
When a wounded area is under tension, a larger scar is known to form. The larger the fibrotic area caused by the healing scar, the stiffer the environment becomes. In this way, a "vicious feedback loop" is created wherein mechanical tension results in continuously stimulated fibroblast overactivation and excessive production of ECM which can subsequently progress to pathologic scarring and contraction (113). As we continue to understand fibroblast behavior in the wound microenvironment, we grow closer to identifying targets for translational therapies for pathologic scarring. Mechanotransduction continues to be identified as a critical mediator of scar formation and may well hold the key to scarless healing.
Author contributions
CB led the writing process and created the figures included in this article. MD and AM made significant and equal text contributions. MG, NL, LK, JP, and JG contributed significantly to the conceptualization, literature review, and editing process. ML and DW are senior authors and experts in the field of wound healing and fibroblast mechanics, who took the lead on conceptualization and expert insight for this invited review article. All authors contributed to the article and approved the submitted version.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
"year": 2023,
"sha1": "3414718e99b66d80e7923813a469407fc996d222",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "3414718e99b66d80e7923813a469407fc996d222",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Large-Scale Multicast Group Secure Transmission Scheme Based on Multi-Carrier FDA
Aiming at the problem that traditional physical layer techniques cannot realize secure transmission in multicast systems, where users are numerous and widely dispersed, a layered transmission method is proposed and a scheme for secure physical layer transmission to multicast groups is designed. First, the layered transmission system model is established. Then, the array weighted vector of each layer is optimized according to the design criterion of maximizing the artificial noise interference power. At the same time, for the case where the number of users in a single multicast group is greater than the number of transmitting antennas, a multicast grouping strategy is introduced, and singular value decomposition and the Lagrange multiplier method are used to obtain the optimal solution. Simulation results show that the proposed method can realize secure communication for users at different distances in the same direction and can distinguish multicast users with the same direction angle but different distances without mutual interference, thus realizing secure communication for large-scale multicast users.
Introduction
With the advancement of mobile communication technology, wireless communication has been widely used in military and civilian fields, and new wireless systems such as the Internet of Things [1], unmanned aerial vehicle (UAV) communication [2], and vehicular networks [3] have also developed vigorously. However, due to the openness of wireless channels, transmitted information is easily intercepted or eavesdropped on [4-6]. Ensuring the security of information is especially important in today's world, where everything is interconnected and the amount of data in wireless communication keeps increasing. Information security will remain a focus and difficulty of wireless communication development. Traditional wireless communication mainly relies on upper-layer encryption technology to ensure the security of information [7]. With the increase in mobile devices and the emergence of supercomputers, information leakage in wireless communication relying on upper-layer encryption has emerged continuously, and information security faces increasing challenges.
Unlike traditional encryption techniques, physical layer security techniques [8] utilize the inherent properties of the wireless channel to provide more fundamental security for wireless communications. The array antenna [9] is an emerging multi-antenna physical layer security technique that exploits the difference between the desired channel and the eavesdropping channel to achieve secure transmission of information [10,11]. Currently, problems in the theoretical research and practical application of array antenna physical layer security technology still need to be urgently addressed. In practical communication scenarios, eavesdroppers often receive information passively and with a certain degree of concealment, so the transmitter cannot accurately obtain the eavesdropper's location information, which reduces the security performance of the system. At the same time, previous studies have mostly addressed physical layer secure communication for single-desired-user systems or multi-desired-user broadcast systems, and there have been fewer studies on complex multicast group communication systems. For example, Qiu et al. investigated the secure transmission problem of broadcast secure communication systems with multiple desired targets and a single eavesdropper [12]. Gao et al. solved the physical layer secure communication problem with a single legitimate target when the eavesdropper's location is unknown by maximizing the artificial noise [13]. If secure unicast communication techniques are applied directly to multicast communication, effective artificial noise cannot be added in the orthogonal space of the legitimate channel vectors, so secure data transmission cannot be guaranteed. Secure communication schemes in the literature [13,14] are mainly based on the principle of maximizing the secrecy capacity and study the secure multicast group communication problem when the eavesdropper is unknown. The literature [15] analyzes a multicast secure communication technique using two-dimensional beam assignment combined with artificial noise. The literature [16] investigated non-orthogonal multiple access (NOMA) assisted secure offloading for vehicular edge computing (VEC) networks in the presence of multiple malicious eavesdroppers, employing physical layer security (PLS) techniques to provide jamming signals to eavesdroppers without interfering with real users. Danyu Diao et al. [17] proposed a joint power allocation and air jamming (PAAJ) scheme to achieve reliable and secure communication in the presence of malicious eavesdroppers. Bingcai Chen et al. used multiple antenna relays to enhance communication and obtain diversity gain [18]. The research methods in the above literature are limited by the number of multicast users and the number of eavesdroppers. In scenarios where a large number of multicast users and eavesdroppers coexist, traditional physical layer security techniques cannot solve the secure transmission problem of large-scale multicast groups well. However, research on physical layer large-scale multicast group secure transmission technology is of great academic value and practical significance for supporting the communication demands of 6G large-scale devices and significantly improving the security performance of 6G communication networks.
Multicast group communication [19] can provide large-capacity and diverse data services for a large number of devices, which makes it suitable for large-scale wireless communication systems. Multicast group communication is therefore characterized by a large number of desired users and wide spatial dispersion, and the traditional directional modulation technique cannot realize secure communication for users at different distances in the same direction. In addition, the artificial noise scrambling technique mainly relies on constructing the null space of the desired channel, which is limited by the number of transmitting antennas (the number of transmitting antennas must be larger than the number of desired users). Traditional physical layer security techniques cannot perfectly guarantee the security of multicast systems, and more dimensions of redundancy need to be introduced to solve such problems. Based on this, the main contributions of this paper are:

1. A layered transmission method is proposed: the transmitter transmits the confidential information in layers while an artificial noise matrix is designed to ensure transmission efficiency.

2. A user grouping strategy is added to the traditional method, and a multicast user grouping multi-carrier frequency-controlled array secure transmission scheme is proposed.
System Model
In this section, the model of the multicast group system is first introduced and the structure of the multi-carrier frequency-controlled array (MFCA) transmitter is analyzed; then, the theoretical derivation is given and a physical layer secure transmission scheme based on the MFCA is proposed.
Consider the basic multicast system shown in Figure 1. The system contains a transmitting base station and J multicast groups, while assuming the presence of one or more passively receiving eavesdroppers around each multicast group. Users within the same multicast group receive the same multicast message and can be spatially dispersed. Assume that the jth multicast group contains $G_j$ legitimate stations. The number of expected users in multicast group communication systems is large, and traditional physical layer secure transmission techniques cannot perfectly guarantee the communication security of multicast systems. Therefore, a physical layer secure transmission scheme based on a multi-carrier frequency-controlled array is proposed. First, the multi-carrier frequency-controlled array transmitter is designed, as shown in Figure 2. The frequency-controlled array antenna consists of N equally spaced omni-directional antennas linearly distributed, with spacing d between neighboring antennas. The position of the first transmitter antenna is taken as the origin of the coordinate system, and the effect of multipath transmission is ignored. Each transmitting antenna in the multi-carrier frequency-controlled array transmitter no longer transmits a traditional single-carrier signal but transmits multi-carrier signals with different frequencies. As shown in Figure 2, the number of subcarriers of each antenna in the multi-carrier frequency-controlled array transmitter is J. The subcarriers of the same order across the antennas are defined as a layer.
The subcarrier frequency of the nth transmitting antenna in the jth layer is denoted as
$$f_{j,n} = f_{j,c} + \Delta f_{j,n}$$
where $f_{j,c}$ is the carrier frequency of the jth layer and $\Delta f_{j,n}$ is the frequency offset between different antennas in the jth layer.
The radiated signal of the jth layer received by the far-field target $(r, \theta)$ is expressed as
$$s_j(t; r, \theta) = \sum_{n=1}^{N} e^{j 2\pi f_{j,n} \left( t - r_n / c \right)}$$
where r denotes the distance from the coordinate origin to the far-field target user, $r_n$ denotes the distance from the nth antenna to the target user, and c denotes the speed of light.
Then, for the far-field target $(r, \theta)$, the jth-layer array guidance vector is written as
$$\mathbf{h}_{L,j}(r, \theta) = \left[ e^{-j 2\pi f_{j,1} r_1 / c},\; e^{-j 2\pi f_{j,2} r_2 / c},\; \ldots,\; e^{-j 2\pi f_{j,N} r_N / c} \right]^{T}$$
Thus, the superposition of all layers' transmitted signals at the far-field target point is expressed as
$$s(t; r, \theta) = \sum_{j=1}^{J} s_j(t; r, \theta)$$
Then, the guidance vector of the multi-carrier array is
$$\mathbf{h}_{L}(r, \theta) = \left[ \mathbf{h}_{L,1}^{T}(r, \theta),\; \mathbf{h}_{L,2}^{T}(r, \theta),\; \ldots,\; \mathbf{h}_{L,J}^{T}(r, \theta) \right]^{T}$$
In this case, $\mathbf{h}_{L} \in \mathbb{C}^{NJ \times 1}$ is an NJ-dimensional vector. The application scale of the multicast system is expanded without increasing the number of transmitting antennas.
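As a concrete illustration, the minimal Python sketch below builds the stacked NJ-dimensional guidance vector described above. All numerical values (N, J, element spacing, carrier frequencies, frequency offsets, target coordinates) are illustrative assumptions, not parameters taken from the paper; the final check simply demonstrates the range-dependence the scheme exploits: two targets at the same angle but different ranges have non-identical guidance vectors.

```python
import numpy as np

c = 3e8                                   # speed of light (m/s)
N, J = 8, 4                               # antennas, carriers/layers (assumed)
d = 0.05                                  # inter-element spacing (m, assumed)
f_c = 1e9 + 10e6 * np.arange(J)           # per-layer carrier frequencies f_{j,c} (assumed)
df = 3e3 * np.arange(N)                   # per-antenna offsets Delta f_{j,n} (assumed, same for all layers)

def guidance_vector(r, theta):
    """Stacked NJ-dimensional multi-carrier FDA guidance vector h_L(r, theta)."""
    n = np.arange(N)
    r_n = r - n * d * np.sin(theta)       # far-field approximation of element-to-target range
    layers = []
    for j in range(J):
        f_jn = f_c[j] + df                # f_{j,n} = f_{j,c} + Delta f_{j,n}
        layers.append(np.exp(-1j * 2 * np.pi * f_jn * r_n / c))  # per-layer vector h_{L,j}
    return np.concatenate(layers)         # h_L in C^{NJ}

h1 = guidance_vector(1500.0, np.deg2rad(30))
h2 = guidance_vector(2500.0, np.deg2rad(30))   # same angle, different range
corr = abs(h1.conj() @ h2) / (np.linalg.norm(h1) * np.linalg.norm(h2))
print(h1.shape, corr)                     # (32,) and a correlation strictly below 1
```

Unlike a single-carrier phased array, whose steering vector depends only on the angle, the correlation here is strictly below one, which is what allows users at the same angle but different ranges to be separated.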
Multi-Carrier Massive Multicast Group Secure Communication Methods
In this section, based on the multi-carrier frequency-controlled array transmitter, the layered transmission method is proposed, and on this basis, the multi-carrier large-scale multicast group secure communication method is designed.
The number of available carriers per transmitter antenna is J; that is, the system is divided into J layers for transmission. First, the users receiving the same information are divided into the same multicast group, while the number of users within this multicast group is kept smaller than the number of antennas at the transmitter. Second, the transmitter transmits the confidential information in layers: the jth layer transmits the modulation symbol $x_j(t)$, which corresponds to all the users within multicast group j. The layered design of the array weighting vector $\mathbf{w}_j$ ensures that the users within multicast group j receive the signal normally without interfering with the users of other multicast groups. Subsequently, artificial noise is introduced before each antenna radiates the signal, and the artificial noise matrix is designed so that the artificial noise is transmitted in the null space of the desired channel, interfering with the eavesdroppers while not affecting the users. Finally, at the receiving end, each multicast group selects the appropriate subcarrier to demodulate the received signal.
The transmit signal vector is
$$\mathbf{s}(t) = \sum_{j=1}^{J} \mathbf{w}_j x_j(t) + \sqrt{P_{AN}}\, \boldsymbol{\Pi} \mathbf{z}(t)$$
where $\mathbf{w}_j$ represents the jth-layer array weighted vector; $P_{AN}$ stands for the artificial noise transmit power; $\boldsymbol{\Pi}$ denotes the artificial noise orthogonal projection matrix; and $\mathbf{z}(t)$ denotes the artificial noise vector, which follows a complex normal distribution, i.e., $\mathbf{z}(t) \sim \mathcal{CN}(\mathbf{0}, \mathbf{I})$. Assume that the location of the gth user within multicast group j is $(r_{L,j,g}, \theta_{L,j,g})$; the user's guidance vector is then $\mathbf{h}_{L,j,g} = \mathbf{h}_{L}(r_{L,j,g}, \theta_{L,j,g})$. With this, the guidance matrix of the users within multicast group j is denoted as
$$\mathbf{H}_{L,j} = \left[ \mathbf{h}_{L,j,1}, \mathbf{h}_{L,j,2}, \ldots, \mathbf{h}_{L,j,G_j} \right]$$
and the layer-j orientation matrix for the users within multicast group j consists of the N rows of $\mathbf{H}_{L,j}$ corresponding to the jth-layer subcarriers. The received signal vector of multicast group j is represented as
$$\mathbf{y}_{L,j}(t) = \mathbf{H}_{L,j}^{H} \mathbf{w}_j x_j(t) + \sum_{i=1, i \neq j}^{J} \mathbf{H}_{L,j}^{H} \mathbf{w}_i x_i(t) + \sqrt{P_{AN}}\, \mathbf{H}_{L,j}^{H} \boldsymbol{\Pi} \mathbf{z}(t) + \mathbf{n}_{L,j}(t) \tag{9}$$
where $\mathbf{n}_{L,j}(t) \sim \mathcal{CN}(\mathbf{0}, \sigma_{L,j}^{2} \mathbf{I}_{G_j})$ denotes the additive white Gaussian noise vector of multicast group j.
Observing Equation (9), it can be found that the received signal vector of multicast group j consists of four parts. The first part is the modulated symbol $x_j(t)$ transmitted by layer j and received by multicast group j; the second part is the interference signal transmitted by the other layers; the third part is the artificial noise; and the fourth part is the channel noise.
First, the layer-j array weighting vector $\mathbf{w}_j$ is designed with the criterion of ensuring that users within multicast group j receive signals normally while not interfering with users within other multicast groups. One or more eavesdroppers exist around or inside each multicast group to steal information, and in actual wireless communication the eavesdroppers' information often cannot be obtained. In this case, the system mainly relies on artificial noise to interfere with the eavesdropper so that it cannot properly demodulate the confidential information. Therefore, under the condition that the users receive the signal normally, the artificial noise transmit power is maximized, and the optimization problem is described as
$$\max_{\mathbf{w}_j,\, P_{AN}} \; P_{AN} \quad \text{s.t.} \quad \left| \mathbf{h}_{L,j,g}^{H} \mathbf{w}_j \right|^2 \geq \xi_{j,g},\; g = 1, \ldots, G_j; \qquad \mathbf{H}_{L,-j}^{H} \mathbf{w}_j = \mathbf{0} \tag{10}$$
where $\mathbf{H}_{L,-j} = \left[ \mathbf{H}_{L,1}, \ldots, \mathbf{H}_{L,j-1}, \mathbf{H}_{L,j+1}, \ldots, \mathbf{H}_{L,J} \right]$ denotes the orientation matrix of the users remaining outside multicast group j.
$\boldsymbol{\xi}_j = \left[ \xi_{j,1}, \xi_{j,2}, \ldots, \xi_{j,g}, \ldots, \xi_{j,G_j} \right]^{T}$, where $\xi_{j,g}$ indicates the minimum receive power required by user g in the multicast group. $L_{-j} = \sum_{i=1, i \neq j}^{J} G_i$ denotes the number of users remaining outside multicast group j. Constraint 1 indicates that all users in multicast group j receive the transmitted signal of layer j and meet the minimum receive power requirement. Constraint 2 indicates that users outside multicast group j are unable to receive the transmitted signal of layer j. The total system transmit power is $P_s = P_{AN} + \sum_{j=1}^{J} P_{L,j}$, where $P_{L,j}$ represents the transmitted signal power of layer j. It is easy to find that the transmit power of the artificial noise can be maximized by minimizing the modulation symbol transmit power, so the optimization problem can be rewritten as
$$\min_{\mathbf{w}_j} \; \left\| \mathbf{w}_j \right\|^2 \quad \text{s.t.} \quad \left| \mathbf{h}_{L,j,g}^{H} \mathbf{w}_j \right|^2 \geq \xi_{j,g},\; g = 1, \ldots, G_j; \qquad \mathbf{H}_{L,-j}^{H} \mathbf{w}_j = \mathbf{0}$$
To solve this optimization problem, a singular value decomposition of the matrix $\mathbf{H}_{L,-j}$ is performed. According to the singular value decomposition theorem, we can get
$$\mathbf{H}_{L,-j} = \mathbf{U}_{-j} \boldsymbol{\Sigma}_{-j} \mathbf{V}_{-j}^{H}$$
so that the columns of $\mathbf{U}_{0,-j}$, the left singular vectors associated with the zero singular values, span the null space of $\mathbf{H}_{L,-j}^{H}$ and $\mathbf{w}_j$ can be written as $\mathbf{w}_j = \mathbf{U}_{0,-j} \mathbf{v}_j$. According to the Lagrange multiplier method, with the receive power constraints active at the optimum, the jth-layer optimal array weighted vector is obtained as
$$\mathbf{w}_j = \mathbf{U}_{0,-j} \tilde{\mathbf{H}}_{j} \left( \tilde{\mathbf{H}}_{j}^{H} \tilde{\mathbf{H}}_{j} \right)^{-1} \sqrt{\boldsymbol{\xi}_j}, \qquad \tilde{\mathbf{H}}_{j} = \mathbf{U}_{0,-j}^{H} \mathbf{H}_{L,j}$$
where the square root is taken element-wise. Next, the artificial noise matrix is calculated. According to the criterion that the artificial noise interferes with the eavesdropper without affecting the signal received by the users, the artificial noise matrix is obtained from
$$\max_{\boldsymbol{\Pi}} \; \mathrm{tr}\!\left\{ \boldsymbol{\Pi} \boldsymbol{\Pi}^{H} \right\} \quad \text{s.t.} \quad \mathbf{H}_{L}^{H} \boldsymbol{\Pi} = \mathbf{0}$$
where $\mathrm{tr}\{\cdot\}$ denotes the trace of the matrix and $\mathbf{H}_{L} = \left[ \mathbf{H}_{L,1}, \mathbf{H}_{L,2}, \ldots, \mathbf{H}_{L,J} \right]$ represents the orientation vector matrix of all multicast groups.
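In essence, the closed-form weight above is the minimum-norm solution of a set of linear equality constraints (zero-forcing toward out-of-group users, with the receive power constraints active at the optimum). The Python sketch below illustrates this numerically; random complex matrices stand in for the steering matrices, and all dimensions and power targets are assumed for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
NJ, G_j, L_out = 32, 3, 9                          # stacked dimension N*J, in-group and out-of-group users (assumed)

H_j = rng.standard_normal((NJ, G_j)) + 1j * rng.standard_normal((NJ, G_j))        # stand-in for H_{L,j}
H_out = rng.standard_normal((NJ, L_out)) + 1j * rng.standard_normal((NJ, L_out))  # stand-in for H_{L,-j}
xi = np.full(G_j, 1.0)                             # minimum receive powers xi_{j,g} (assumed targets)

# Stack all equality constraints A^H w = b and take the minimum-norm (Lagrange) solution
A = np.hstack([H_j, H_out])
b = np.concatenate([np.sqrt(xi), np.zeros(L_out)])
w = A @ np.linalg.solve(A.conj().T @ A, b)

print(np.abs(H_j.conj().T @ w) ** 2)               # ~ xi for every in-group user
print(np.max(np.abs(H_out.conj().T @ w)))          # ~ 0 for every out-of-group user
```

The minimum-norm weight frees the largest possible share of the total power budget for artificial noise, which is exactly the design goal stated above.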
It is easy to find that the multicast groups' guidance vector matrix satisfies $\mathbf{H}_{L} \in \mathbb{C}^{NJ \times L_T}$ with $L_T = \sum_{i=1}^{J} G_i$, where $G_i$ indicates the number of users in multicast group i, so that $L_T$ is the total number of users in all multicast groups. According to the null-space mapping criterion, the artificial noise matrix can be calculated, for example as the orthogonal projection $\boldsymbol{\Pi} = \mathbf{I}_{NJ} - \mathbf{H}_{L} \left( \mathbf{H}_{L}^{H} \mathbf{H}_{L} \right)^{-1} \mathbf{H}_{L}^{H}$, whenever $L_T < NJ$. The traditional artificial noise technology is based on a single-carrier transmitting antenna array. In that case, the number of transmitting antennas must be greater than the number of multicast users, i.e., $N > L_T$, to interfere with the eavesdropper without affecting the users' received signals. However, the artificial noise jamming technology based on a multi-carrier frequency-controlled array can still interfere with eavesdroppers when the number of expected users satisfies $N < L_T < NJ$, without affecting the users' received signals. Without increasing the number of transmitting antennas, the scale of the multicast system is expanded so that more users can communicate securely.
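The null-space condition can be checked numerically as well. The sketch below draws an orthonormal basis of the null space of the all-user guidance matrix via SVD and confirms that noise placed there leaves the users untouched; the matrix is random and the dimensions (NJ = 32, L_T = 12) are assumed purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
NJ, L_T = 32, 12                                   # requires L_T < N*J
H_L = rng.standard_normal((NJ, L_T)) + 1j * rng.standard_normal((NJ, L_T))  # stand-in for the all-user matrix

# Orthonormal basis of null(H_L^H) from the SVD of H_L^H
_, _, Vh = np.linalg.svd(H_L.conj().T)
null_basis = Vh.conj().T[:, L_T:]                  # NJ x (NJ - L_T) basis columns

z = rng.standard_normal(NJ - L_T) + 1j * rng.standard_normal(NJ - L_T)
an = null_basis @ z                                # artificial noise confined to the null space

print(np.max(np.abs(H_L.conj().T @ an)))           # ~ 0: no leakage onto any legitimate user
```

Note that with a single-carrier array the stacked dimension collapses from NJ to N, so the null space is empty as soon as L_T reaches N; the multi-carrier structure is what keeps it non-empty for N < L_T < NJ.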
Solution Extension
In this section, based on the theoretical analysis in the previous two sections, the communication problem when the number of multicast groups and the number of carriers are unequal is discussed in further depth, and a solution is given: a secure transmission scheme for multicast user groups based on grouping with the multi-carrier frequency-controlled array.
The multi-carrier large-scale multicast group secure transmission scheme proposed above assumes that the number of multicast groups and the number of carriers are equal; the multicast secure communication problem must be analyzed further when they are not. It is assumed that the number of multicast groups is Q and the number of carriers in the transmitting antenna array is J.
When the number of carriers in the transmitting antenna array is greater than the number of multicast groups, that is, J > Q, Q carriers can be randomly selected for secure multicast group communication. In this case, there are $C_J^Q$ possible selections in total, and rational use of the randomness of the transmitted carriers can improve the security of multicast group communication. However, when the number of carriers J is greater than the number of multicast groups Q but the number of users in multicast group q exceeds the number of transmitting antennas, that is, $G_q > N$, secure communication in multicast group q cannot be guaranteed. Aiming at secure communication in such scenarios, we improve the proposed scheme by adding a user grouping strategy to the original method, yielding a multicast user grouping scheme for secure transmission with multi-carrier frequency-controlled arrays.
When the number of users in multicast group q is greater than the number of transmitting antennas, a user grouping strategy is introduced to regroup the users in multicast group q. The basic principle is to place users whose steering vectors are close to each other in the same group, which makes it convenient for the transmitter to optimize the array weighting vector. The number of subgroups of multicast group q is limited to $\lceil G_q/N \rceil$, where $\lceil \cdot \rceil$ indicates rounding up, and this number must be smaller than J − Q. Under this limit, the number of users in each subgroup is guaranteed to be less than the number of transmitting antennas, and the number of carriers required by the transmitting antenna array is kept small to avoid wasting wireless spectrum resources. The average channel similarity of desired user i is defined as the mean correlation between its steering vector and those of the other users in the group. The user grouping strategy consists of two phases, i.e., a group head selection phase and a grouping phase. In the group head selection phase, the average channel similarity of each user is first calculated and then ranked, and the top $\lceil G_q/N \rceil$ users are selected as group heads. In the grouping phase, each remaining user is grouped with the group head whose steering vector is most similar to its own.
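The two-phase strategy can be illustrated with the following Python sketch (the steering vectors and the correlation-based similarity measure are hypothetical stand-ins for the paper's definitions):

import numpy as np

def group_users(steering, n_groups):
    """Phase 1: pick the n_groups users with the highest average channel
    similarity as group heads.  Phase 2: assign every remaining user to
    the head with the most similar steering vector."""
    S = np.abs(steering @ steering.conj().T)
    norms = np.linalg.norm(steering, axis=1)
    S = S / np.outer(norms, norms)            # normalized similarity
    avg_sim = (S.sum(axis=1) - 1.0) / (S.shape[0] - 1)
    heads = np.argsort(avg_sim)[-n_groups:]   # phase 1: group heads
    groups = {int(h): [int(h)] for h in heads}
    for u in range(S.shape[0]):               # phase 2: assignment
        if u in heads:
            continue
        best = int(heads[np.argmax(S[u, heads])])
        groups[best].append(u)
    return groups

rng = np.random.default_rng(1)
steer = rng.standard_normal((12, 8)) + 1j * rng.standard_normal((12, 8))
print(group_users(steer, n_groups=2))  # e.g. G_1 = 12 users, N = 8 antennas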
When the number of carriers in the transmitter antenna array is less than the number of multicast groups, that is, J < Q, and the multicast users satisfy $L_T < NJ$, secure communication in the multicast system can only be achieved by continuing to increase the number of carriers at the transmitter side. In this case, each multicast group contains fewer users, which yields a larger security capacity but increases the consumption of wireless resources and lowers spectrum utilization.
Security Performance Analysis
To evaluate the security level of wireless communication systems, the security capacity is the main technical indicator. In this section, the security of the proposed method is evaluated using the security capacity as the criterion.
The signal-to-interference-plus-noise ratio (SINR) of multicast group j is expressed as in Equation (18). According to Equation (18), the maximum achievable rate of multicast group j is obtained as $R_j = \log_2(1 + \mathrm{SINR}_j)$. For multicast group j, the receive SINR of the eavesdropper is given by Equation (20), and according to Equation (20), the maximum rate achievable from the transmitter to the eavesdropper is $R_{e,j} = \log_2(1 + \mathrm{SINR}_{e,j})$. Combining Equations (19) and (21), the security capacity of multicast group j in the proposed scheme is expressed as $C_{s,j} = [R_j - R_{e,j}]^{+}$, and the average security capacity of the system is the average of $C_{s,j}$ over the J multicast groups.
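As a minimal numerical illustration of these expressions (a sketch; the SINR values are hypothetical):

import numpy as np

def secrecy_capacity(sinr_user, sinr_eve):
    """Security capacity of each multicast group: the user's achievable
    rate minus the eavesdropper's, floored at zero."""
    r_user = np.log2(1.0 + np.asarray(sinr_user, dtype=float))
    r_eve = np.log2(1.0 + np.asarray(sinr_eve, dtype=float))
    return np.maximum(r_user - r_eve, 0.0)

cs = secrecy_capacity([50.0, 30.0], [0.5, 0.2])
print(cs, cs.mean())  # per-group capacities and the system average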
Simulation Results and Analysis
In this section, the artificial noise interference power and the distribution of SINR under different conditions are comprehensively analyzed. Finally, the method proposed in this paper is compared with the traditional phased-array multicast group secure transmission method, and it is shown that the proposed method can realize secure communication for users at different distances in the same direction and, at the same time, distinguish multicast users that share the same direction angle but lie at different distances.
Figure 3 shows the simulated artificial noise interference power distribution. It can be seen that the artificial noise interference power forms a null at each multicast user location while being uniformly distributed over the rest of the space. This indicates that, when there are fewer multicast users than transmitting antennas, both the single-carrier secure communication method and the multi-carrier secure transmission method can design the artificial noise matrix such that the artificial noise interferes with the eavesdroppers without affecting the multicast users. When the number of multicast users is $L_T = 9$, the artificial noise interference power distribution of the multi-carrier secure transmission method is shown in Figure 4; the artificial noise interference power forms nulls at all multicast user locations. This indicates that the artificial noise technique based on a multi-carrier frequency-controlled array can ensure secure communication for more users without increasing the number of transmitting antennas. Subsequently, the received SINR distribution of the multicast groups is analyzed; Figure 5 corresponds to multicast group 1 and Figure 6 to multicast group 2. Analyzing Figure 5, the peak SINR occurs at the user locations of multicast group 1 and is minimal at the user locations of multicast group 2; the SINR is also very small at all other locations. Figure 6 shows a situation similar to that of multicast group 1. This demonstrates that, after the optimization of the proposed scheme, each multicast group effectively receives its own confidential signals, the other multicast groups are unable to receive signals that are not intended for them, and the eavesdroppers are even less able to steal any confidential information. At the same time, compared with the traditional phased-array multicast group secure transmission method, the proposed method realizes secure communication for users at different distances in the same direction and is able to distinguish multicast users with the same direction angle but different distances, such as the user at $(r_{1,2}, \theta_{1,2}) = (1400\ \mathrm{m}, 40^{\circ})$. Figure 7 analyzes the relationship between the system security capacity of the proposed multi-carrier secure transmission method and the number of multicast users, and compares it with the single-carrier secure transmission method. As can be seen from the figure, both the traditional single-carrier secure transmission method and the proposed method provide good secure communication when there are few multicast users; notably, even then the average security capacity of the proposed scheme is better than that of the single-carrier secure transmission method. This is mainly because the proposed method has more degrees of freedom available for optimization, and the transmission method incorporates the proposed algorithm that maximizes the artificial noise transmit power. When the number of multicast users increases, the security performance of the conventional single-carrier multicast group transmission method gradually decreases, and the gap in average security capacity relative to the proposed method widens rapidly.
Next, Figures 8 and 9 verify the feasibility of the multicast grouping strategy with the number of frequency-controlled array transmitting antennas N = 8, the number of available carriers per transmitting antenna J = 3, the number of multicast groups Q = 2, the total number of multicast users $L_T = 15$, and the number of users in multicast group 1 $G_1 = 12$. The spatial distribution of user locations in multicast group 1 is given in Figure 8a. Using the multicast grouping strategy, the users in multicast group 1 form two independent multicast groups, as shown in Figure 8b. Observing the number of users and their spatial locations after grouping shows that the number of users in each multicast group after regrouping is less than the number of transmitting antennas, and that the multicast grouping strategy classifies desired users with similar channels into the same group, which is favorable for beamforming. Figure 9 analyzes the relationship between the average security capacity and the number of users for multicast group 1. It can be seen that, after the introduction of the multicast grouping strategy, the users in multicast group 1 are regrouped and the information can subsequently be transmitted using efficient beamforming and artificial noise techniques, which improves the security performance of the system. In particular, when the number of users in multicast group 1 is larger than the number of transmitting antennas, the multi-carrier communication method without a multicast grouping strategy cannot ensure secure communication for the multicast users, and the average security capacity of the system decreases rapidly.
Conclusions
In this paper, a physical-layer secure transmission method for large-scale multicast groups based on a multi-carrier frequency-controlled array is proposed. First, the multi-carrier frequency-controlled array transmitter is designed to construct a large-scale multicast group system model. Second, the array weighting vectors are optimized hierarchically with the objective of maximizing the artificial noise interference power while ensuring reliable signal reception by the users, and the optimal solution is obtained using singular value decomposition and the Lagrange multiplier method. Finally, extensive numerical simulations show that the multi-carrier large-scale multicast group secure transmission method realizes secure and reliable communication of confidential information in a large-scale communication system: the users in each multicast group can effectively receive the information, while the eavesdroppers are inhibited from intercepting the confidential information to the maximum extent. Compared with the traditional single-carrier secure transmission method, the proposed method can ensure that more users communicate securely. In the future, our research focus will mainly be on integrating the FDA into practical systems and on further research into FDA-related applications.
Figure 3. Artificial noise interference power distribution of multicast groups.
Figure 7. The relationship between the average security capacity of the system and the number of multicast users.
Figure 8. Spatial distribution of users in multicast group1.
Figure 9. The relationship between the average security capacity and the number of users. | 2023-11-25T16:07:30.318Z | 2023-11-23T00:00:00.000 | {
"year": 2023,
"sha1": "e842799cc9e60ec449a904faaaa34f56156807bd",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/23/23/9358/pdf?version=1700739688",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "13c3f226183da68758edf69e6c702a6769a8b025",
"s2fieldsofstudy": [
"Engineering",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
} |
244729291 | pes2o/s2orc | v3-fos-license | An extension of Thomassen's result on choosability
Thomassen proved that all planar graphs are $5$-choosable. Škrekovski strengthened the result by showing that all $K_{5}$-minor-free graphs are $5$-choosable. Dvořák and Postle pointed out that all planar graphs are DP-$5$-colorable. In this note, we first improve these results by showing that every $K_{5}$-minor-free or $K_{3, 3}$-minor-free graph is DP-$5$-colorable. In the final section, we further improve these results under the term strictly $f$-degenerate transversal.
Let G be a graph and L be a list assignment for G. For each vertex v ∈ V(G), we associate it with a set L_v = {v} × L(v); for each edge uv ∈ E(G), we associate it with a matching M_uv between L_u and L_v. Let $M = \bigcup_{uv \in E(G)} M_{uv}$, and we call M the matching assignment over L. The matching assignment M is called a k-matching assignment if L(v) = {1, 2, . . . , k} for every v ∈ V(G). A cover of G is a graph H_{L,M} (simply written H) meeting two conditions:
• the vertex set of H is the disjoint union of L_v for all v ∈ V(G); and
• the edge set of H is the matching assignment M.
Let G be a graph and H be a cover of G over a list assignment L. An (L, M )-coloring of G is an independent set I of H such that |I ∩ L v | = 1 for each v ∈ V (G). A graph G is DP-k-colorable if for any list assignment L(v) ⊇ {1, 2, . . . , k} and any matching assignment M , it admits an (L, M )-coloring. Note that every DP-k-colorable graph is k-choosable.
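To make the definitions concrete, the following Python sketch (illustrative only, not from the paper) encodes each matching M_uv as a partial map between the colours of u and v, and tests whether a chosen transversal is independent in the cover H; with "straight" identity matchings this reduces to checking a proper colouring.

def is_LM_coloring(choice, matchings):
    """choice maps each vertex v to the colour c of the chosen vertex
    (v, c) in L_v; matchings maps each edge (u, v) to a dict encoding
    M_uv as a partial colour-to-colour map.  The choice is an
    (L, M)-coloring iff no edge of the cover joins two chosen vertices."""
    for (u, v), m in matchings.items():
        if m.get(choice[u]) == choice[v]:
            return False
    return True

# A triangle with identity matchings and k = 3: the cover condition
# coincides with proper 3-colouring of K3.
edges = [("a", "b"), ("b", "c"), ("a", "c")]
ident = {e: {c: c for c in (1, 2, 3)} for e in edges}
print(is_LM_coloring({"a": 1, "b": 2, "c": 3}, ident))  # True
print(is_LM_coloring({"a": 1, "b": 1, "c": 3}, ident))  # False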
Dvořák and Postle [2] have pointed out that all planar graphs are DP-5-colorable. We improve this result to the following Theorem 1.1, and we also extend the result for planar graphs to the class of K_{3,3}-minor-free graphs.
Theorem 1.1. All K_5-minor-free graphs are DP-5-colorable.
Theorem 1.2. All K_{3,3}-minor-free graphs are DP-5-colorable.
Let H be a cover of G, and let f be a function from V(H) to the set of non-negative integers. A transversal T of H is strictly f-degenerate if every nonempty subgraph of H[T] contains a vertex x whose degree in that subgraph is less than f(x). In other words, all the vertices of H[T] can be ordered as x_1, x_2, . . . , x_n such that each vertex x_i has fewer than f(x_i) neighbors on the right-hand side. Such an order is an f-removing order, and the reverse order x_n, x_{n−1}, . . . , x_1 is an f-coloring order.
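The ordering characterization lends itself to a simple greedy test, sketched below in Python (illustrative only, not part of the paper): repeatedly delete a vertex that currently has fewer than f(x) remaining neighbours; the deletion sequence is an f-removing order, and the run succeeds exactly when the vertex set is strictly f-degenerate.

def f_removing_order(adj, f):
    """Greedy search for an f-removing order: repeatedly remove a
    vertex x with fewer than f(x) neighbours among the remaining
    vertices; returns None when no such order exists."""
    remaining = set(adj)
    order = []
    while remaining:
        pick = next((x for x in remaining
                     if len(adj[x] & remaining) < f[x]), None)
        if pick is None:
            return None  # the set is not strictly f-degenerate
        order.append(pick)
        remaining.remove(pick)
    return order  # reverse it to obtain an f-coloring order

# Triangle: with f = 3 everywhere an order exists; with f = 2 it does
# not, since every vertex initially has exactly 2 (not fewer than 2)
# neighbours.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
print(f_removing_order(adj, {1: 3, 2: 3, 3: 3}))  # e.g. [1, 2, 3]
print(f_removing_order(adj, {1: 2, 2: 2, 3: 2}))  # None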
By definition, a vertex x can never be chosen in a strictly f -degenerate transversal if f (x) = 0. Hence, we can add some vertices into L v and define the value of f to be zero on these new vertices, so that all the L v have the same cardinality. On the other hand, it doesn't matter what the labels of the vertices are, so we may assume that L v = {v} × [s], where s is an integer. A cover H together with a function f is called a valued-cover.
In Section 3, we strengthen Theorems 1.1 and 1.2 to Theorem 1.3. In order to demonstrate how Thomassen's technique in [6] is extended, we first give a proof for Theorem 1.1 in Section 2, and then give one for Theorem 1.3, even though Theorems 1.1 and 1.2 are special cases of Theorem 1.3. For a function f , we use R f to denote the range of f .
Assume that G is a plane graph and C is a cycle in it. We will use Int(C) (resp. Ext(C)) to denote the subgraph induced by V (C) and the vertices inside (resp. outside) of C. The cycle C is a separating cycle of G if both the interior and the exterior of C have at least one vertex.
DP-5-coloring
A plane triangulation is an embedded plane graph such that each of its faces is bounded by a cycle of length three. A near-triangulation is an embedded plane graph such that each bounded face is bounded by a triangle and the unbounded face (outer face) is bounded by a cycle. An ℓ-sum of two graphs G′ and G″ is the graph G such that G = G′ ∪ G″ and G′ ∩ G″ = K_ℓ.
The Wagner graph is a 3-regular graph with 8 vertices and 12 edges, see Fig. 1. Note that the Wagner graph is non-planar, thus the Wagner graph cannot be a subgraph of a planar graph. Wagner [10] gave the following characterization of planar graphs in terms of graph minors. Theorem 2.1 (Wagner [10]). A graph is planar if and only if it does not contain K 5 or K 3,3 as a minor. By Wagner's Theorem, the class of K 5 -minor-free graphs and the class of K 3,3 -minor-free graphs are two superclasses of planar graphs.
A graph G is maximal K_5-minor-free if it does not contain K_5 as a minor, but G + xy contains a K_5-minor for every pair of nonadjacent vertices x and y in G. Wagner [10] also gave the following characterization of maximal K_5-minor-free graphs.
Theorem 2.2 (Wagner [10]). Every maximal K_5-minor-free graph can be obtained from the Wagner graph and plane triangulations by recursive 2-sums and 3-sums.
The following theorem and its proof are very similar to those in [6], but for completeness we give a complete proof here.
Theorem 2.3. Assume that G is a near-triangulation such that the outer face is bounded by a cycle O = v_1v_2⋯v_p, and that L is a list assignment with |L(v)| ≥ 3 for each v ∈ V(O) and |L(v)| ≥ 5 for each v ∈ V(G) \ V(O). If R_0 is an (L, M)-coloring of v_1 and v_2, then R_0 can be extended to an (L, M)-coloring of G.
Proof. The assertion is proved by induction on |V(G)|. When G has only three vertices, G = O = K_3 and the assertion is obvious. So we can assume that |V(G)| ≥ 4 and that the assertion is true for smaller graphs.
Suppose first that O has a chord v_iv_j. Then v_iv_j lies on two cycles C_1 and C_2 of O + v_iv_j with v_1v_2 on C_1. Applying the induction hypothesis to Int(C_1), R_0 can be extended to an (L, M)-coloring of Int(C_1).
After v i and v j are colored, it can be further extended to an (L, M )-coloring of Int(C 2 ). This yields a desired (L, M )-coloring of G.
So we can assume that O has no chord. Let v_1, u_1, u_2, . . . , u_m, v_{p−1} be the neighbors of v_p in a natural cyclic order around v_p. Since all the bounded faces of G are bounded by triangles and O has no chord, G − v_p is a near-triangulation whose outer cycle is obtained from O by replacing v_p with u_1, u_2, . . . , u_m, and the induction is completed along the lines of [6].
Theorem 2.4. Assume that G is a maximal K_5-minor-free graph. If K is a subgraph of G isomorphic to K_2 or K_3, then every DP-5-coloring ϕ of K can be extended to a DP-5-coloring of G.
Proof. Suppose to the contrary that G is a counterexample with |V (G)| as small as possible.
Assume that G is a plane triangulation and K is a separating 3-cycle of G. Note that Int(K) and Ext(K) are both plane triangulations and maximal K 5 -minor-free graphs. By minimality, every DP-5-coloring ϕ of K can be extended to a DP-5-coloring ϕ 1 of Int(K) and a DP-5-coloring ϕ 2 of Ext(K). Combining ϕ 1 and ϕ 2 yields a DP-5-coloring of G, a contradiction.
Assume that G is a plane triangulation and K = [x 1 x 2 x 3 ] bounds a 3-face. Note that G has at least four vertices. We can redraw the plane triangulation such that K is the boundary of the outer face. Note that G − x 3 is a near-triangulation. Since x 3 on K is precolored, every uncolored vertex incident with the outer face of G − x 3 has at least four admissible colors other than ϕ(x 3 ). Applying Theorem 2.3 to G − x 3 , we obtain a DP-5-coloring of G whose restriction on K is the precoloring ϕ.
Assume that G is a plane triangulation and K = y 1 y 2 . We can further assume that y 1 y 2 is incident with a 3-face [y 1 y 2 y 3 ]. Clearly, the precoloring of K can be extended to a DP-5-coloring of G[y 1 , y 2 , y 3 ], and we can reduce the problem to the previous case.
If G is the Wagner graph, then we can greedily extend the precoloring of K to a DP-5-coloring of G since G is 3-regular.
By Theorem 2.2, we can assume that G is a 2-sum or 3-sum of two maximal K_5-minor-free graphs G_1 and G_2 with K ⊂ G_1. By minimality, the precoloring ϕ of K can be extended to a DP-5-coloring ϕ_1 of G_1. By minimality once again, we can extend the restriction of ϕ_1 on G_1 ∩ G_2 to G_2. This yields a DP-5-coloring of G whose restriction on K is the precoloring ϕ. Now we can easily prove Theorem 1.1.
Theorem 1.1. All K_5-minor-free graphs are DP-5-colorable.
Proof. Since every K 5 -minor-free graph is a spanning subgraph of a maximal K 5 -minor-free graph, it suffices to prove the result for maximal K 5 -minor-free graphs. We can first color two adjacent vertices in G, and extend the coloring to the whole graph according to Theorem 2.4.
Since the proof of the following result is similar to that in Theorem 2.4, we leave it as an exercise to the readers.
Theorem 2.6. Assume that G is a maximal K_{3,3}-minor-free graph. If K is a subgraph of G isomorphic to K_2, then every DP-5-coloring of K can be extended to a DP-5-coloring of G.
Theorem 1.2. All K_{3,3}-minor-free graphs are DP-5-colorable.
Proof. Since each K 3,3 -minor-free graph is a spanning subgraph of a maximal K 3,3 -minor-free graph, it suffices to show the result for maximal K 3,3 -minor-free graphs. We can first color two adjacent vertices in G, and further extend the precoloring to the whole graph according to Theorem 2.6.
Strictly f-degenerate transversal
In this section, we extend the results on DP-5-coloring to particular strictly f -degenerate transversal. The following two lemmas were presented by Nakprasit and Nakprasit [5, Lemma 2.3] with a different term.
For a vertex subset K of V (G), or a subgraph K of G, we use H K to denote the cover restricted on K, i.e., Proof. Let S be an f -removing order of T . Since f (x) = 1 for each x ∈ T , every vertex in T has no neighbor on the right of the order S , so we can move all the vertices in T to the rightest of the order. In other words, we can delete all the vertices in T from the order S and put the vertices in T on the right side of all the other vertices of S . Observe that the resulting order satisfies the desired condition. We first extend Theorem 2.3 to the following result. Note that Theorem 3.1 was first proved in [5, Theorem 1.6], but the following proof is a little bit different from that one. Theorem 3.1. Assume that G is a near-triangulation such that the outer face is bounded by a cycle and If R 0 is a strictly f -degenerate transversal of H[L v1 ∪L v2 ], then R 0 can be extended to a strictly f -degenerate transversal of H.
Proof. We prove the assertion by induction on |V(G)|. When G has exactly three vertices, G = O = K_3 and the assertion is obvious. So assume that |V(G)| ≥ 4 and that the assertion is true for smaller graphs. Suppose that O has a chord uw. It follows that uw lies on two cycles C_1 and C_2 of O + uw with v_1v_2 in C_1. Let G_1 := Int(C_1) and G_2 := Int(C_2). Applying the induction hypothesis to G_1, R_0 can be extended to a strictly f-degenerate transversal R of H_1, and then R ∩ H[L_u ∪ L_w] can be extended to a strictly f*-degenerate transversal R* of H* as in Lemma 3.2. Therefore, R* ∪ R is a desired strictly f-degenerate transversal of H. The other case is that O has no chord. Let v_1, u_1, u_2, . . . , u_m, v_{p−1} be the neighbors of v_p in a natural cyclic order around v_p, and let U = {u_1, u_2, . . . , u_m}. Since all the bounded faces of G are bounded by triangles and O has no chord, we can define a function f† by decreasing f on suitable vertices of the lists L_u with u ∈ U and keeping it unchanged otherwise.
It follows that, for each u ∈ O′, we have $\sum_{z \in L_u} f^{\dagger}(z) \ge 3$.
By the induction hypothesis and Lemma 3.1, (H − L_{v_p}, f†) contains a strictly f†-degenerate transversal R† with an f†-removing order S† such that the vertices in R_0 are rightmost in the order. Let (v_p, c_p) be a vertex in X* which is not adjacent to R† ∩ L_{v_{p−1}}. Therefore, we insert (v_p, c_p) into S† as the third element from the right to obtain an f-removing order of a strictly f-degenerate transversal of H.
By the induction hypothesis, (H − L_{v_p}, f†) admits a strictly f†-degenerate transversal R† with an f†-removing order S† such that the vertices in R_0 are rightmost in the order. Let S be the sequence obtained from S† by inserting (v_p, 1) as the immediate predecessor of (v_{p−1}, c_{p−1}), where (v_{p−1}, c_{p−1}) ∈ L_{v_{p−1}} ∩ R†. Recalling that f†(v_p, 1) = 2, it is not hard to check that S is an f-removing order of a strictly f-degenerate transversal of H.
Instead of proving Theorem 1.3, we prove the following stronger theorem for K_5-minor-free graphs, and leave the corresponding result for K_{3,3}-minor-free graphs to the readers.
Theorem 3.2. Assume that G is a K_5-minor-free graph, and (H, f) is a valued-cover with R_f ⊆ {0, 1, 2}. If K is a subgraph isomorphic to K_2 or K_3, and f(v, 1) + · · · + f(v, s) ≥ 5 for each v ∈ V(G), then every strictly f-degenerate transversal of H_K can be extended to a strictly f-degenerate transversal of H.
Proof. Suppose to the contrary that (G, H, f, R 0 ) is a counterexample with |V (G)| as small as possible, where R 0 is a strictly f -degenerate transversal of H K . Similar to the previous results, we only need to consider the case that G is a maximal K 5 -minor-free graph.
Assume that G is a plane triangulation and K is a separating triangle of G. Note that Ext(K) and Int(K) are both plane triangulations and maximal K 5 -minor-free graphs. By minimality and Lemma 3.2, R 0 can be extended to a strictly f -degenerate transversal of H.
Assume that G is a plane triangulation and K = [x_1x_2x_3] bounds a 3-face. We can redraw the plane triangulation such that K bounds the outer face. Let (x_3, c_3) be the vertex of R_0 in L_{x_3}, and define a function f′ on H − L_{x_3} by decreasing f on the neighbors of (x_3, c_3) and keeping it unchanged otherwise. Note that the graph G − x_3 is a near-triangulation. Since the range of f is a subset of {0, 1, 2}, for each w on the outer face of G − x_3 the hypothesis of Theorem 3.1 remains satisfied. By Theorem 3.1, R_0 \ {(x_3, c_3)} can be extended to a strictly f′-degenerate transversal of H \ L_{x_3} with an f′-removing order S such that the two vertices in R_0 \ {(x_3, c_3)} are rightmost in the order. According to an f-removing order of R_0, we can insert (x_3, c_3) into S such that the three vertices in R_0 are the three rightmost elements in the order, obtaining an f-removing order of a strictly f-degenerate transversal of H. Assume that G is a plane triangulation and K = x_1x_2. We may assume that x_1x_2 is incident with a 3-face [x_1x_2x_3]. Clearly, R_0 can be extended to a strictly f-degenerate transversal of H_{[x_1,x_2,x_3]}, and we can reduce the problem to the previous case.
If G is the Wagner graph, then we can greedily extend R 0 to a strictly f -degenerate transversal of H since G is 3-regular.
By Theorem 2.2, assume that G is a 2-sum or 3-sum of two maximal K 5 -minor-free graphs G 1 and G 2 with K ⊂ G 1 . By minimality and Lemma 3.2, R 0 can be extended to a strictly f -degenerate transversal of H.
In Theorems 3.1 and 3.2, there is a restriction on f, namely that the range of f is a subset of {0, 1, 2}. If this restriction could be dropped, the results would imply two theorems due to Thomassen: every planar graph can be partitioned into a 3-degenerate graph and an independent set [8], and every planar graph can be partitioned into a 2-degenerate graph and a forest [7]. So the second author and some others made the following conjecture in [4].
Conjecture. Assume that G is a planar graph and (H, f ) is a positive-valued cover. If s ≥ 2 and f (v, 1) + · · · + f (v, s) ≥ 5 for each v ∈ V (G), then H admits a strictly f -degenerate transversal. | 2021-12-01T02:15:52.662Z | 2021-11-30T00:00:00.000 | {
"year": 2021,
"sha1": "90b81d81d378049c8e596d532af4cad2aff3f513",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "90b81d81d378049c8e596d532af4cad2aff3f513",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
229686009 | pes2o/s2orc | v3-fos-license | Are Reallocations between Sedentary Behaviour and Physical Activity Associated with Better Sleep in Adults Aged 55+ Years? An Isotemporal Substitution Analysis
Physical activity has been proposed as an effective alternative treatment option for the increasing occurrence of sleep problems in older adults. Although higher physical activity levels are associated with better sleep, the association between specific physical activity intensities and sedentary behaviour (SB) with sleep remains unclear. This study examines the associations of statistically modelled time reallocations between sedentary time and different physical activity intensities with sleep outcomes using isotemporal substitution analysis. Device-measured physical activity data and both objective and subjective sleep data were collected from 439 adults aged 55+ years. Replacing 30 min of SB with moderate to vigorous intensity physical activity (MVPA) was significantly associated with an increased number of awakenings. Moreover, a reallocation of 30 min between light physical activity (LPA) and MVPA was significantly associated with increased sleep efficiency. Furthermore, reallocating 30 min of SB to LPA showed a significant association with decreased sleep efficiency. There were no significant associations of time reallocations for wake time after sleep onset, length of awakenings, and sleep quality. These results improve our understanding of the interrelationships between different intensities of movement behaviours and several aspects of sleep in older adults.
Introduction
Ageing is associated with an increased prevalence of sleep problems [1,2]. Approximately 50% of older adults suffer from these age-related changes in sleep, including (1) spending more time in bed but less time asleep, (2) more disrupted and less deep sleep, and (3) more frequent early risings [3][4][5]. These changes in sleep are problematic, as they cause fatigue, daytime sleepiness, and excess napping during the day, which in turn can influence sleep at night [6,7]. In addition, they can negatively affect physical functioning and quality of life, as well as cognitive function and mental health [1,4,6,8,9]. Most sleep problems in older adults are currently treated with medication [10]. However, these medications can cause side effects, such as blurred vision, dizziness, sedation, anxiety, and fatigue [11][12][13]. Moreover, these medications are not always effective or safe in this population. A better understanding of how movement behaviours relate to sleep is therefore important because of (1) the increasing number of older adults globally and the known positive associations between PA and sleep in older adults, (2) the public health burden of age-associated declines in sleep and the high prevalence of sleep problems, and (3) the far-reaching consequences of sleep problems for all aspects of health and the reliance on sleep medication, often accompanied by negative side effects in older adults.
Therefore, the aim of this study is to examine the associations of time replacement between device-based assessed SB and PA (of light and moderate to vigorous intensity) with objective and subjective sleep outcomes in older adults.
Design and Sample
Cross-sectional data were gathered from community-dwelling older adults (aged ≥ 55 years) from July 2018 to July 2019. These adults were recruited at weekly meetings held by a community service organization providing PA and socio-cultural activities for older adults in Flanders (OKRA SPORT+). Older adults who were not able to attend the meetings due to limited physical mobility were excluded. The ethics committee of UZ Leuven granted approval for this study (Ref. no: S61581). All participants received study information prior to the study and granted written informed consent.
Variables and Measurement
Data were collected using self-administered questionnaires (demographics and Pittsburgh Sleep Quality Index (PSQI)) and accelerometers (sleep efficiency, wake after sleep onset (WASO), and number and length of awakenings).
Questionnaires
Demographic variables (age, gender, education, and marital status), as well as general health information (smoking, use of alcohol, caffeine or screen time before bedtime, use of sleep medication, and presence of chronic conditions), were collected by means of a questionnaire. The definition of categories for education are in line with the International Standard Classification of Education (ISCED) 2011 [48]. Sleep quality was assessed with the PSQI questionnaire, a frequently used reliable and validated 19-item self-reported questionnaire in a variety of adult and older adult populations [48][49][50][51][52][53][54]. This questionnaire determines subjective sleep quality over the last month, and contains seven sleep characteristics, including sleep quality, sleep latency, sleep duration, habitual sleep efficiency, sleep disturbances, use of sleep medication, and daytime dysfunction. The total PSQI score ranges from 0 to 21 points, with five points or higher indicating poor-quality sleep [48,50]. Compared with other subjective measuring methods, the PSQI is easy to complete for older adults, and provides highly reliable and valid measures of sleep quality (Cronbach's α 0.83) [52][53][54].
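For clarity, the global PSQI score is simply the sum of the seven component scores, each ranging from 0 to 3. A minimal sketch of this final aggregation step is given below (the component scoring itself follows the instrument's scoring manual and is omitted here):

def psqi_global(components):
    """Sum the seven PSQI component scores (each 0-3) into the global
    score (0-21); five points or higher indicates poor sleep quality."""
    assert len(components) == 7 and all(0 <= c <= 3 for c in components)
    total = sum(components)
    return total, ("poor" if total >= 5 else "good")

print(psqi_global([1, 0, 1, 1, 2, 0, 1]))  # (6, 'poor')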
Accelerometers
Movement behaviours (SB, LPA, and MVPA) and sleep outcomes (sleep efficiency, WASO, and amount and length of awakenings) were measured through accelerometry (Actigraph type wGT3X-BT, Actigraphcorp, Pensacola, FL, USA). The Actigraph wGT3X-BT is a wrist-worn accelerometer that measures and records physical movement associated with daily activity and sleep [55]. All participants were asked to wear the Actigraph device on their non-dominant wrist for six consecutive days, including two weekend days and five nights. The Actigraph wGT3X-BT has been used in numerous studies to measure SB, PA, and sleep in older adults, and resulted in valid measurements for this target population [56][57][58][59][60]. Accelerometer data were processed using well-established validated algorithms available in the Actilife software package (Actilife, v6.13.4) for wear time validation [56], PA [57,61] and sleep/wake identification [62]. Only data gathered over a minimum of four wear days of at least 10 h of waking wear time data were included in the analysis [56,62,63].
Analysis
Associations between different time use within specific types of movement behaviours and sleep outcomes in older adults were examined using isotemporal substitution models (ISMs). ISMs were first introduced in 2009 [40], and make it possible to examine associations of absolute time reallocation (i.e., 30 min) between movement behaviours (i.e., SB, LPA, and MVPA) with both objective and subjective sleep as outcomes. These reallocations of time are based on statistical modelling rather than on real-life changes in movement behaviours. More specifically, in this paper, we examined the associations between 30-min time reallocations from SB to LPA, from LPA to MVPA, and from SB and MVPA with subjective sleep quality and objectively measured sleep efficiency, WASO, and the number and length of awakenings. The behaviour in which time was being reallocated from was omitted from the model. Resulting coefficients of the remaining behaviours represented the association of reallocating 30 min from one behaviour (omitted) to another behaviour (included). These analyses are not indicative of individual changes in behaviour, rather, they model a theoretical shift in behaviour at a population level.
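A minimal sketch of how such a model can be fitted in Python follows (the data are simulated and the variable names are hypothetical; the study's covariate-adjusted models follow the same pattern with covariates added to the formula):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated minutes/day, for illustration only.
rng = np.random.default_rng(42)
n = 439
df = pd.DataFrame({
    "SB": rng.normal(540, 90, n),
    "LPA": rng.normal(300, 60, n),
    "MVPA": rng.normal(45, 20, n),
    "sleep_eff": rng.normal(94, 3, n),
})
df["total"] = df["SB"] + df["LPA"] + df["MVPA"]

# Express behaviours in 30-min units, drop SB from the model, and keep
# total time in: the LPA/MVPA coefficients then estimate the association
# of reallocating 30 min from SB to that behaviour.
for c in ("SB", "LPA", "MVPA", "total"):
    df[c + "_30"] = df[c] / 30.0
model = smf.ols("sleep_eff ~ LPA_30 + MVPA_30 + total_30", data=df).fit()
print(model.params[["LPA_30", "MVPA_30"]])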
Three multiple linear regression models were composed: (1) a crude model (unadjusted without potential control variables), (2) a partially adjusted model adjusted for demographics, i.e., age, gender, education, and marital and professional status, and (3) a fully adjusted model adjusted for demographics and covariates that have shown to impact sleep, i.e., smoking [64][65][66], the use of alcohol [67][68][69][70], caffeine [71], screen time before bedtime [72,73], the use of sleep medication [74], and the presence of chronic conditions [75]. Our data met the assumptions that apply for linear regression models. All analyses were performed using SPSS version 24.0. Statistical significance was set at p ≤ 0.05.
Results
In total, 453 older adults participated in this study. Due to the minimum wear time compliance of four wear days of at least 10 h, data from 439 older adults (97%) were eligible for analysis. The study sample therefore consisted of 439 older adults (mean age: 71.6 years), of which 28% were males and 71% were females. See Table 1 for a detailed overview of participant characteristics. In short, 45% of the study sample was low-educated, and the majority of the sample were married or living together (76%) and were no longer professionally active (94%). Most of the study participants were non-smokers (94%) and did not use any alcohol (74%) or caffeine (82%) before bedtime. In 89% of the participants, screen time was present before bedtime. Although 53% of the participants reported the presence of chronic conditions, only 13% of the participants tended to take sleep medications. Movement behaviours and sleep parameters are reported in Table 2.
The results from all three ISM models (unadjusted, partially adjusted, and fully adjusted) are reported in Table 3. There were no significant associations between reallocations from SB to LPA, SB to MVPA, and LPA to MVPA with WASO, length of awakenings, and sleep quality (PSQI). For sleep efficiency, there was a significant negative association (i.e., lower efficiency) of replacing 30 min of SB with LPA in the unadjusted model (B = −6.19; 95% CI:
Discussion
This is the first large-scale study that used isotemporal substitution analysis to examine the associations of reallocating device-based measurement of SB and different intensities of PA with objective and subjective sleep in older adults. There were significant associations of replacing time between movement behaviours with sleep efficiency and the number of awakenings. More specifically, replacing 30 min of SB to LPA was associated with lower sleep efficiency in the unadjusted, the partially adjusted, and the fully adjusted models. By contrast, replacing 30 min of LPA with MVPA was associated with better sleep efficiency in the unadjusted model. Furthermore, replacing 30 min of SB with MVPA was associated with an increased number of awakenings in the unadjusted model. There were no statistically significant associations of replacing movement behaviours for wake time after sleep onset (WASO), length of awakenings, or sleep quality.
To our knowledge, the associations of such time reallocations with sleep in older adults have been studied before only in a smaller sample of Japanese adults [47]. This Japanese study included 70 adults aged 65+ years who did not have pre-existing diagnosed sleep problems and who did not use sleep medication. In that study, SB, PA, and sleep were also measured using Actigraph accelerometers and the PSQI. Their findings supported the positive association of replacing 30 min of SB with MVPA with the number of awakenings in our study. However, these authors reported additional associations of replacing 30 min of SB with LPA with better WASO, improved sleep fragmentation, and improved PSQI. Furthermore, replacing 30 min of SB with LPA was associated with an increased sleep efficiency in that study, whereas we found a negative association for sleep efficiency for this specific time reallocation. It should be noted though that we did not exclude older adults with pre-existing diagnosed sleep problems nor older adults who did use sleep medication. Moreover, compared with the participants in our study, participants in this Japanese study showed lower MVPA levels, a lower sleep efficiency, a higher WASO, and a higher number of awakenings. These differences in PA and sleep outcomes could account for the divergent results, because there was more room for improvement in the Japanese participants' sleep.
The lack of more significant associations of time reallocations in this study may be surprising given the previously observed positive associations of both LPA and MVPA with WASO, sleep quality, sleep latency, and sleep disturbances [9,[17][18][19], and given the negative associations of SB with sleep efficiency and sleep disturbances [35][36][37][38]. Potential explanations for this lack of more significant associations include (1) the measurement of SB, PA, and sleep, (2) the type of analysis (ISM), and (3) the characteristics of the included study sample in this study.
The Measurement of SB, PA, and Sleep
We collected subjective sleep data using a self-reported questionnaire (PSQI for sleep quality). Previous studies have already shown that the PSQI is a valid and reliable self-reported questionnaire to measure sleep quality in older adults [48,[50][51][52][53][54]. It is well accepted by older adults and widely used. For example, a recent review summarizing the effects of PA programs on sleep in older adults showed that the PSQI was used as the main outcome measure in all but one study [9].
Furthermore, we collected SB, PA, and sleep data with accelerometers. Accelerometry is considered the standard for objectively measuring SB, and has been shown to provide valid estimates of SB. Wrist-worn Actigraph accelerometers have been widely used for measuring PA, and have been shown to increase wear compliance in participants in free-living conditions [76][77][78][79][80]. In terms of sleep, we realize that polysomnography is considered the golden standard for measuring sleep objectively, providing detailed information about different sleep stages and sleep patterns [81][82][83]. However, collecting polysomnographic data is time-consuming and expensive, and therefore not suitable for large-scale studies [81][82][83]. The use of (non-dominant) wrist-worn accelerometers provides sleep data while people reside in their own natural environment, and has also been shown to be valid in older adults [79,82,84]. One may argue, though, that the placement of the accelerometer on the wrist, rather than on the upper leg, may have affected the ability to distinguish SB from LPA, as the accuracy of measuring SB using accelerometers may depend on the wear location [85][86][87].
As stated in previous studies, objective and subjective sleep measures should ideally be combined to obtain comprehensive insight into different aspects of sleep quality and quantity. Interestingly, in the present paper, there were only statistically significant associations of reallocating time with device-based measured sleep efficiency and the number of awakenings. There were no statistically significant associations for the subjectively measured sleep quality, despite the reported associations between PA and sleep quality in previous research [9,47].
Despite the fact that the Actigraph wGT3X-BT has been shown to provide valid measurements for this target population for SB, PA, and sleep [55][56][57][58][59], using a single device that was wrist-worn could have affected the measurement of SB, PA, and sleep in this study, as the cut-off between SB and LPA and SB and sleep could be too closely aligned and difficult to distinguish [80,88]. This could have influenced the results in this study. Further research should therefore focus on integrating methods to assess SB, PA, and sleep throughout 24 h using a single device.
The Type of Analysis (ISM)
Isotemporal substitution analysis has been used in different populations with different health outcomes, such as mortality, general health status, mental health, adiposity, physical fitness, cardiometabolic health, and sleep [39,[43][44][45]89,90]. According to a recent review in 2018 [39], the exchanged time between movement behaviours in studies using ISM varied from one minute to 120 min, with 30 min being the most common time reallocation. From a public health perspective, reallocating 30 min between movement behaviours seems more feasible and durable than longer periods of time (i.e., 60 min) to integrate in daily life situations, which is critical to sustain behaviour change in the long run [39,91,92]. Moreover, although vigorous intensity PA is also recommended for older adults on a weekly basis, we did not analyse the reallocation of time to vigorous intensity PA because of the low amounts of this type of intensity in our sample. This is a reflection of general population levels of PA [91,93,94], indicating that vigorous intensity activities are harder to maintain for longer periods of time and are therefore not durable in a daily lifestyle for older adults [91,95,96]. However, we reallocated time between SB and LPA with MVPA, which itself includes both moderate and vigorous intensity PA.
The Characteristics of the Study Sample
The demographic characteristics of the study sample are comparable to the Belgian population, as the majority of the population of older adults are also female (56%), married (56%), and no longer professionally active at the age of 65 years [97]. The average participant in this study was more physically active and less sedentary compared with the general Belgian population [91,93]. This could have affected the results in this study, as older adults with low PA levels are more likely to benefit from the reallocation of time, as this constitutes a larger proportional change. Future research should therefore use a random sampling procedure resulting in a more representative study sample in terms of PA levels. There should also be a specific effort to include the very old segment of the population (85+) given the rising prevalence of both physical inactivity and sleep problems with older age [1,2,98,99].
In terms of sleep, 13% of the participants used sleep medication on a daily basis in the present study. The average sleep efficiency of 94% was well above the cut-off for effective sleep of 85% [100]. Interestingly, the relatively low average PSQI global score for sleep quality of six points out of 21 indicates poor sleep [48,[50][51][52]. Thus, our study sample showed efficient sleep but poor sleep quality. It should be noted again that we did not exclude participants with pre-existing sleep problems, nor did we assess this in the questionnaire. However, we did adjust our analysis and controlled for sleep medication by adding this as a covariate in the fully adjusted models. Previous research showed a high prevalence of sleep problems when people were physically inactive [98]. Therefore, including only older adults with pre-existing sleep problems could have resulted in more positive significant associations, as problematic sleepers could experience a larger margin benefit when increasing their PA levels.
Potential Mechanisms for an Association between SB, PA, and Sleep
Potential mechanisms for a beneficial link between PA and sleep include the promotion of relaxation, blood circulation, and energy expenditure. In turn, these changes are beneficial to initiating and maintaining sleep [9,20,21]. Research also showed positive links between SB and sleep; relaxing before bedtime is often done in a sitting, reclining, or lying position, and can be beneficial for sleep, as it facilitates relaxation and helps to slow down from the day [99,[101][102][103][104]. Moreover, relaxing decreases stress hormones and allows the body to prepare for sleep [99,[102][103][104]. SB prior to bedtime could therefore be beneficial for sleep outcomes. Interestingly, the vast majority of our study sample (89%) did use screen time as a form of SB prior to bedtime. Compared with the earlier mentioned positive links between SB and sleep, screen time is associated with worse sleep outcomes because of the blue light rays [36,71,72,105]. Therefore, depending on the specific timing and type of SB, there may be different associations with aspects of sleep. The Actigraph measurements in our study did not allow us to define the type of SB. This could be an interesting direction for further research given the importance of the quality and timing of SB and its effect on sleep. Furthermore, the strength of association between PA and sleep differs between specific sleep outcomes, with WASO, sleep quality, sleep latency, and sleep disturbance showing the highest proportion of significant improvement after PA [9].
Strengths and Limitations
This study has several strengths and limitations. Strengths include the large sample size, the availability of subjective and objective sleep data, and the application of a novel statistical approach ISM to examine replacement effects.
Limitations include generalizability, the cross-sectional design, the potential for type 1 error, and the use of a single wrist-worn device. First, the study sample was recruited from one single socio-cultural organization whose members were shown to be more physically active compared with the general population of older Belgian adults. Second, SB, PA, and sleep outcomes were measured at the same time in this cross-sectional design. Although we examined the associations of movement behaviours with sleep outcomes, we cannot exclude the possibility of bi-directional associations. Although SB and PA performed before bedtime could have affected sleep outcomes, we were not able to control for the exact timing of SB and PA in this analysis. Third, although we used a comprehensive set of objective and subjective sleep outcomes, we performed several tests in different models. Therefore, we cannot exclude the possibility of a type 1 error: given the α level of 0.05, there is a 5% probability that significant findings are due to chance rather than reflecting true associations. The fact that the associations with sleep efficiency and the number of awakenings were also found in a previous study supports real associations rather than type 1 errors. Fourth, we used one wrist-worn device (Actigraph) to measure SB, PA, and sleep outcomes. Despite the fact that there were no collinearity issues between movement behaviours and sleep data in this study, using wrist-worn devices to measure SB does not allow us to define the exact type of SB (i.e., sitting, reclining, or lying). If we had been able to collect information about the type of SB and examine reallocations between different types of SB, according to their sleep-promoting characteristics, and different intensities of PA, we could have found more specific associations for each type of SB.
Generalizability and Implications
The conclusions from this study apply to generally healthy older adults rather than to older adults with specific sleep problems or chronic conditions. Moreover, it should also be taken into account that our analyses were based on statistical modelling rather than on real-life changes in movement behaviours.
Conclusions
This study showed associations of time replacement by using isotemporal substitution analysis: (1) of replacing 30 min of LPA with MVPA with improved sleep efficiency; (2) of replacing 30 min from SB to MVPA with an increased number of awakenings; and (3) of replacing 30 min from SB to LPA with decreased sleep efficiency. There were no significant associations of time reallocations for WASO, length of awakenings, and sleep quality. Although it should be emphasized that we examined associations of modelled time reallocations with sleep, the results from this study improve our understanding of the interrelationships between different movement behaviours and sleep in older adults.
Acknowledgments: The authors would like to acknowledge OKRA sport+ and their members for their invaluable help with the recruitment and participation in this study, and Stef Van Puyenbroeck for the statistical support he provided.
Conflicts of Interest:
The authors declare no conflict of interest. | 2020-12-24T09:08:34.846Z | 2020-12-01T00:00:00.000 | {
"year": 2020,
"sha1": "d98921176b5025a66806dc0f7229f487d8035224",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/17/24/9579/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3f33a81209a8108cdb758842cad0370d9e6c6d98",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
237938523 | pes2o/s2orc | v3-fos-license | Numerical Simulation of Hybrid Nanofluid Mixed Convection in a Lid-Driven Square Cavity with Magnetic Field Using High-Order Compact Scheme
In this study, the energy transference of a hybrid Al2O3-Cu-H2O nanosuspension within a lid-driven heated square chamber is simulated. The domain is affected by a horizontal magnetic field. The vertical sidewalls are insulated and the horizontal borders of the chamber are held at different fixed temperatures. A fourth-order accuracy compact method is applied to work out the vorticity-stream function view of incompressible Oberbeck–Boussinesq equations. The method used is validated against previous numerical and experimental works and good agreement is shown. The flow patterns, Nusselt numbers, and velocity profiles are studied for different Richardson numbers, Hartmann numbers, and the solid volume fraction of hybrid nanoparticles. Flow field and heat convection are highly affected by the magnetic field and volume fraction of each type of nanoparticles in a hybrid nanofluid. The results show an improvement of heat transfer using nanoparticles. To achieve a higher heat transmission rate by using the hybrid nanofluid, flow parameters like Richardson number and Hartmann number should be considered.
Introduction
During past years, many efforts have been made to achieve reasonable thermal efficiency of systems. Improvement of the heat transfer rate through mixed convective flow [1] and the addition of nanoparticles are part of these efforts [2]. The mixed convective circulation in a cavity with a heated lower wall has been investigated computationally by Moallemi and Jang [3]. They examined the influence of the Prandtl number on the rate of energy transference and flow dynamics. Iwatsu et al. [4] studied how heat was transferred inside a cavity with a temperature difference at the horizontal walls. In addition, in processes like casting and cooling of liquid metals, the magnetic field acts as an external force and affects the flow field and heat convection [5]. Many studies have investigated the significance of the hydromagnetic field on the flow field and heat convection with various computational and analytical techniques. Chamkha [6] studied the heat and flow patterns of free convection in heat-absorbing and heat-generating enclosures with a magnetic field. His results have shown that the magnetic field strength strongly affects the heat convection and flow parameters of the chamber. Al-Salem et al. [7] studied the impact of the direction of a moving wall on magneto-hydrodynamic (MHD) mixed convection for different Grashof numbers, Hartmann numbers, and Reynolds numbers. They found that the flow field and heat convection are affected by the direction of lid movement and that strengthening the magnetic field weakens heat transfer. By applying a magnetic field, a Lorentz force opposite to the flow direction is generated. This magnetic force reduces the convective flow and the heat transfer rate. The Lorentz force, which enters the Navier-Stokes equations as a body force, results from the interaction of the electrical current and the magnetic field. It should be noted that the interaction between the magnetic field and the flow depends on the fluid viscosity, conductivity, and flow characteristics [8]. In other words, when non-ferrofluids and non-ferromagnetic particles are used in a cavity, forces due to magnetization do not apply to the flow.
In some applications, such as magnetic storage media, magnetic sensors, and cooling systems, the presence of a magnetic field is unavoidable, so some researchers noticed that adding nanoparticles significantly enhances the heat transfer [9][10][11][12]. Balla et al. [9] considered the effects of different nanoparticles in an inclined square cavity, which was affected by the magnetic field. Mohebbi and Rashidi [10] proved that energy transference intensifies with growth of the volume fraction of Al 2 O 3 particles in an L-shaped chamber. Ma et al. [11] investigated the nanofluid natural convection in a baffled U-shaped enclosure in the presence of a magnetic field. They found that the rate of heat transfer is suppressed by the magnetic field. The mixed convection heat transfer of nanofluid flow in a vertical channel with sinusoidal walls under a magnetic field effect was investigated numerically by Rashidi et al. [12].
Another type of nanofluid that has recently received attention, is the hybrid nanofluid. Simultaneous combinations of a metallic nanoparticle with its non-metallic type increases the thermal conductivity as well as stability of the nanofluid [13]. In this way, the properties of two or three nanoparticles can be used. For example, metal nanoparticles have high thermal conductivity but can cause a chemical reaction in the fluid, while non-metallic nanoparticles have high stability despite low thermal conductivity [13]. Until now, many research studies have been done in the field of hybrid nanofluids. Moghadassi et al. [14] compared the specifications of Al 2 O 3 -H 2 O and Al 2 O 3 -Cu-H 2 O. They ascertained that the convective energy transference is far higher for the hybrid nanofluid. The heat conductivity and viscosity of Al 2 O 3 -Cu-H 2 O hybrid nanosuspension in a tube were analyzed by Suresh et al. [15]. They showed that energy transference is raised when the hybrid nanofluid is applied. In addition, Suresh et al. [16] studied the laminar flow in a heated tube filled with Al 2 O 3 -Cu-H 2 O hybrid nanofluid experimentally, and showed that the Nusselt number is increased in a hybrid nanofluid in comparison with pure water. Ghalambaz et al. [17] investigated an Ag-MgO/water hybrid nanofluid inside a square cavity. The effects of variation of the main parameters, such as the volume fraction of the nanoparticles and the Rayleigh number, were studied. The effect on the entropy production and MHD convection of the hybrid nanofluid Al 2 O 3 -Cu in a porous square enclosure was studied numerically by Abdel-Nour et al. [18]. They found that convective heat transfer becomes stronger with the enhancement of the Rayleigh number while it detracts with the rise in Hartmann number.
High-order mathematical simulation of incompressible Navier-Stokes equations has been performed by various researchers [19][20][21][22]. Garmann [21] has explored a sixth-order compact differencing method for solving incompressible flows such as steady lid-driven cavities and fluid flow around a cylinder. It was found that the presented method provided high accuracy solutions on coarse grids. In the current study, flow patterns and heat convection through a hybrid nanofluid are numerically studied. The energy equation and the vorticity-stream function formulation are computed by the high-order compact scheme. The influence of Richardson numbers, Hartmann numbers, and hybrid nano-sized particle concentration on the flow are studied comprehensively. The current research is arranged as follows. Section 2 expresses the governing equations. The numerical methodology is explained in Section 3. Section 4 gives the results of the selected problem. Finally, the conclusions are presented in Section 5.
Governing Equations
The continuity, momentum, and energy equations with thermal buoyancy and a horizontal magnetic field are as follows [23]:

∂u/∂x + ∂v/∂y = 0, (1)

u ∂u/∂x + v ∂u/∂y = −(1/ρ_hnf) ∂p/∂x + (μ_hnf/ρ_hnf)(∂²u/∂x² + ∂²u/∂y²), (2)

u ∂v/∂x + v ∂v/∂y = −(1/ρ_hnf) ∂p/∂y + (μ_hnf/ρ_hnf)(∂²v/∂x² + ∂²v/∂y²) + ((ρβ)_hnf/ρ_hnf) g (T − T_c) − (σ_hnf/ρ_hnf) B₀² v, (3)

u ∂T/∂x + v ∂T/∂y = α_hnf (∂²T/∂x² + ∂²T/∂y²), (4)

where u and v are the fluid velocities along the x- and y-axes, p is the pressure, ρ is the density, T is the temperature, μ is the viscosity, α is the heat diffusivity, β is the coefficient of volumetric heat expansion, B₀ is the magnitude of the applied magnetic field, and σ is the electrical conductivity. Note that the subscript hnf refers to the hybrid nanofluid properties. Equations (1)-(4) are nondimensionalized with the non-dimensional quantities X = x/H, Y = y/H, U = u/U₀, V = v/U₀, θ = (T − T_c)/(T_h − T_c), where H is the cavity side length, and the velocity components are taken through the dimensionless stream function ψ, defined as U = ∂ψ/∂Y and V = −∂ψ/∂X, with the vorticity ω = ∂V/∂X − ∂U/∂Y. This yields the stream-function equation (5), the vorticity-transport equation (6), and the energy equation (7):

∂²ψ/∂X² + ∂²ψ/∂Y² = −ω, (5)

U ∂ω/∂X + V ∂ω/∂Y = (ν_hnf/ν_f)(1/Re)(∂²ω/∂X² + ∂²ω/∂Y²) + Ri ((ρβ)_hnf/(ρ_hnf β_f)) ∂θ/∂X − (Ha²/Re)((σ_hnf ρ_f)/(σ_f ρ_hnf)) ∂V/∂X, (6)

U ∂θ/∂X + V ∂θ/∂Y = (α_hnf/α_f)(1/(Re Pr))(∂²θ/∂X² + ∂²θ/∂Y²), (7)

where the subscript f denotes the base fluid properties.
The dimensionless parameters used are the Prandtl number Pr = ν_f/α_f, the Reynolds number Re = U₀H/ν_f, the Grashof number Gr = g β_f (T_h − T_c) H³/ν_f², the Richardson number Ri = Gr/Re², and the Hartmann number Ha = B₀H (σ_f/μ_f)^(1/2). In Equations (5)-(7), the Al2O3-Cu-H2O hybrid nanosuspension density, specific heat, thermal expansion, and thermal diffusivity are given by [13]:

ρ_hnf = (1 − φ) ρ_f + φ_Al2O3 ρ_Al2O3 + φ_Cu ρ_Cu,
(ρc_p)_hnf = (1 − φ)(ρc_p)_f + φ_Al2O3 (ρc_p)_Al2O3 + φ_Cu (ρc_p)_Cu,
(ρβ)_hnf = (1 − φ)(ρβ)_f + φ_Al2O3 (ρβ)_Al2O3 + φ_Cu (ρβ)_Cu,
α_hnf = k_hnf/(ρc_p)_hnf,

where φ = φ_Al2O3 + φ_Cu is the total volume fraction of nanoparticles. The thermal conductivity k_hnf and the electrical conductivity σ_hnf of the hybrid nanosuspension are calculated by Maxwell-type relations [24]:

k_hnf/k_f = [k_p + 2k_f − 2φ(k_f − k_p)]/[k_p + 2k_f + φ(k_f − k_p)], with k_p = (φ_Al2O3 k_Al2O3 + φ_Cu k_Cu)/φ,

σ_hnf/σ_f = 1 + 3(σ_p/σ_f − 1)φ/[(σ_p/σ_f + 2) − (σ_p/σ_f − 1)φ], with σ_p = (φ_Al2O3 σ_Al2O3 + φ_Cu σ_Cu)/φ.

The viscosity of the hybrid nanosuspension is calculated by the Brinkman model [25]:

μ_hnf = μ_f/(1 − φ)^2.5.
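For a quick numerical check of these mixture rules, a minimal Python sketch is given below; the water, Al2O3, and Cu property values are typical literature numbers (an assumption), not the values of Table 1, and the function name hybrid_properties is hypothetical.

```python
# Sketch: effective properties of an Al2O3-Cu-H2O hybrid nanofluid via the
# mixture rules above (Brinkman viscosity, Maxwell-type conductivity).
# Property values below are common literature numbers, not Table 1.

def hybrid_properties(phi_al2o3, phi_cu):
    rho_f, k_f, mu_f = 997.1, 0.613, 8.91e-4   # water: kg/m^3, W/(m K), Pa s
    rho_al, k_al = 3970.0, 40.0                # Al2O3
    rho_cu, k_cu = 8933.0, 401.0               # Cu

    phi = phi_al2o3 + phi_cu                   # total solid volume fraction
    rho_hnf = (1 - phi) * rho_f + phi_al2o3 * rho_al + phi_cu * rho_cu
    mu_hnf = mu_f / (1 - phi) ** 2.5           # Brinkman model

    # Maxwell model with an effective particle conductivity for the hybrid
    k_p = (phi_al2o3 * k_al + phi_cu * k_cu) / phi
    k_hnf = k_f * (k_p + 2 * k_f - 2 * phi * (k_f - k_p)) / (
        k_p + 2 * k_f + phi * (k_f - k_p))
    return rho_hnf, mu_hnf, k_hnf

print(hybrid_properties(0.025, 0.025))  # phi = 0.05 in total
```

Evaluated at φ = 0.05, the sketch reproduces the expected trends: density and conductivity rise with loading, while the Brinkman viscosity grows as (1 − φ)^−2.5.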
Numerical Method
The derivatives in Equations (6) and (7) are calculated by the three-point fourth-order compact technique through the following tridiagonal systems of equations [26]:

ϕ′_(i−1) + 4ϕ′_i + ϕ′_(i+1) = (3/h)(ϕ_(i+1) − ϕ_(i−1)),
ϕ″_(i−1) + 10ϕ″_i + ϕ″_(i+1) = (12/h²)(ϕ_(i+1) − 2ϕ_i + ϕ_(i−1)),

where ϕ′ and ϕ″ are the first and second derivatives of any field variable ϕ and h is the grid spacing. A fourth-order compact difference approximation, denoted Equation (17), is applied for the numerical solution of the stream-function Equation (5) [27]; Equation (17) is solved using the under-relaxation technique.
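As an illustration of how such tridiagonal compact systems are solved in practice, here is a small Python sketch (not the authors' code); the third-order one-sided boundary closures used here are a common choice and an assumption, since the boundary formulas of [26] are not reproduced above.

```python
import numpy as np
from scipy.linalg import solve_banded

def compact_first_derivative(phi, h):
    """Fourth-order Pade scheme: phi'_{i-1} + 4 phi'_i + phi'_{i+1}
    = 3 (phi_{i+1} - phi_{i-1}) / h at interior nodes."""
    n = len(phi)
    ab = np.zeros((3, n))                    # banded storage for solve_banded
    ab[0, 1:], ab[1, :], ab[2, :-1] = 1.0, 4.0, 1.0
    rhs = np.zeros(n)
    rhs[1:-1] = 3.0 * (phi[2:] - phi[:-2]) / h
    # third-order one-sided closures at the two boundaries (an assumption)
    ab[1, 0] = ab[1, -1] = 1.0
    ab[0, 1] = 2.0
    ab[2, -2] = 2.0
    rhs[0] = (-5 * phi[0] + 4 * phi[1] + phi[2]) / (2 * h)
    rhs[-1] = (5 * phi[-1] - 4 * phi[-2] - phi[-3]) / (2 * h)
    return solve_banded((1, 1), ab, rhs)

# sanity check on a 76-node grid (as in the 75x75-cell mesh used later)
x = np.linspace(0.0, 1.0, 76)
err = compact_first_derivative(np.sin(2 * np.pi * x), x[1] - x[0]) \
      - 2 * np.pi * np.cos(2 * np.pi * x)
print(np.max(np.abs(err)))   # small, consistent with fourth-order accuracy
```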
The stream function on the boundary is set to zero, and Neumann boundary conditions for θ are imposed on the insulated walls. Note that the vorticity is not defined on the boundary, so a numerical boundary condition for the vorticity must be constructed. By solving Equation (5) on the wall, a fourth-order discretization is obtained for the vorticity magnitude on the boundaries [28], expressed in terms of the near-wall stream-function values and the tangential wall velocity V_w. The convergence criterion is specified as

max |ϕ^(n+1) − ϕ^n| ≤ ε,

where n is the iteration number and ϕ stands for each of the field variables; the iteration continues until all three fields ω, ψ, and θ satisfy the criterion. The Nusselt number characterizes the convective heat transfer. The average Nu on the top wall is computed as

Nu_avg = −(k_hnf/k_f) ∫₀¹ (∂θ/∂Y)|_(Y=1) dX.
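A sketch of the post-processing implied by these definitions follows, assuming the reconstructed forms of the convergence test and the average Nusselt integral above; the function names are hypothetical.

```python
import numpy as np

def average_nusselt(theta, dY, k_ratio=1.0):
    """Average Nu on the top wall, Nu(X) = -(k_hnf/k_f) dtheta/dY at Y = 1,
    integrated over 0 <= X <= 1 with the trapezoidal rule.
    theta[i, j] ~ theta(X_i, Y_j)."""
    # second-order one-sided derivative at the top boundary
    dtheta_dY = (3.0 * theta[:, -1] - 4.0 * theta[:, -2] + theta[:, -3]) / (2.0 * dY)
    nu_local = -k_ratio * dtheta_dY
    X = np.linspace(0.0, 1.0, theta.shape[0])
    return float(np.sum(0.5 * (nu_local[1:] + nu_local[:-1]) * np.diff(X)))

def converged(new, old, tol=1e-6):
    """Relative-change stopping test, applied to each of omega, psi, theta."""
    scale = max(np.max(np.abs(new)), 1e-30)
    return np.max(np.abs(new - old)) <= tol * scale
```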
Results and Discussion
In this section, mixed convective motion under an applied Lorentz force is numerically simulated. The domain is a square chamber saturated with an Al2O3-Cu-H2O hybrid nanosuspension. A high-order in-house computational code was developed by the authors and validated against numerical results available in the literature.
Problem Description and Boundary Conditions
The square cavity, with constant but different temperatures imposed on the horizontal walls, is displayed in Figure 1. The side boundaries are thermally insulated, and the domain is subjected to a uniform horizontal magnetic field of strength B₀. The top lid moves to the right with velocity U₀, while the other three boundaries are motionless. The properties of water as the host liquid and of the Al2O3 and Cu nanoparticles are shown in Table 1.
Validation
To validate the proposed approach, the results were compared with the available experimental and numerical data from [23,29]. The experimental results of Krane and Jessee [29] for natural convection in an air-filled cavity are shown in Figure 2; the comparison shows that the current solution is in good agreement with the experimental data. The numerical benchmark problem is the square chamber with a moving top wall, saturated with a Cu-H2O nanosuspension. The vertical borders are kept at fixed temperatures, with the left border warmer than the right one, and the horizontal walls are insulated. The isotherms obtained by the current code have been compared with those of [23]. As can be seen in Figure 3, for Re = 100 and Ra = 1.47 × 10⁴, the agreement is good. Figure 4 compares the average Nu on the upper boundary with [23]; the results are consistent with the aforementioned study.
Obtained Results and Analysis
Here, the fourth-order computational scheme is applied to simulate the heat transfer and flow in a chamber saturated with an Al2O3-Cu-H2O hybrid nanosuspension for different values of Ha, φ, and Ri. For all simulations, Gr = 100 was considered. First, a grid-independence study was conducted using the fourth-order compact technique, and the results are shown in Figure 5. The distribution of the vertical velocity V at Y = 0.5 shows that the curves overlap for grids of 75 × 75 and finer. Hence, considering the computational cost, all the simulations were performed on a 75 × 75 grid. The average Nusselt number on the hot wall was used as a measure of the sensitivity of the solution accuracy. Table 2 shows the effect of grid quality on the accuracy of the results. The percentage error confirms that the grid of 75 × 75 elements is appropriate for the simulation; note that the percentage error was calculated from the difference between the present and previous values of the Nusselt number. The effect of adding Al2O3-Cu and Al2O3 nanoparticles on the streamlines and isotherms at Ri = 0.01 and Ri = 1 is shown in Figures 6 and 7, where the influence of the magnetic field is also investigated. Generally, it can be said that the temperature contours tend to align parallel to the magnetic field. The primary vortex at Ha = 0 is divided into two and three vortices when the magnetic field is applied, because the kinetic energy of the fluid decreases as the magnetic field strengthens. Indeed, the use of hybrid Al2O3-Cu nanoparticles causes a deformation of the temperature contours. In all cases, the streamlines in the hybrid nanofluid and the pure fluid stay close together; however, utilizing Al2O3 nanoparticles dislocates the streamlines under the magnetic influence. The dynamics of the flow influenced by the magnetic field and the hybrid solid volume fraction are displayed in Figure 10, which shows vertical velocity profiles versus the X-axis at Y = 0.5 for different Ha and φ. As Ha increases, the vertical velocity decreases. It is evident that the magnetic influence makes the nanoparticles effective in the flow dynamics in the cavity. In the vertical velocity profile, the weak vortices in the center of the field are displaced by the presence of the Al2O3-H2O nanofluid and the magnetic field. However, adding Cu nanoparticles, whose electrical conductivity is high, causes the vortices to return to their positions in the pure fluid, as shown in Figures 5 and 6. In addition, adding a higher percentage of Cu nanoparticles does not cause much change in the results.
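The grid-sensitivity bookkeeping described above can be expressed compactly; the sketch below uses illustrative Nusselt values, not those of Table 2.

```python
# Sketch of the grid-independence check: compare the average Nusselt number
# on successively refined grids and report the percentage change, accepting
# a grid once the change from the next refinement falls below a threshold.
def grid_independence(nu_by_grid, threshold_pct=1.0):
    """nu_by_grid: list of (n, Nu_avg) pairs ordered by refinement."""
    for (n_prev, nu_prev), (n_cur, nu_cur) in zip(nu_by_grid, nu_by_grid[1:]):
        err_pct = abs(nu_cur - nu_prev) / abs(nu_cur) * 100.0
        print(f"{n_prev}x{n_prev} -> {n_cur}x{n_cur}: {err_pct:.2f} % change")
        if err_pct < threshold_pct:
            return n_prev          # the coarser grid is already adequate
    return nu_by_grid[-1][0]

# illustrative values only (not Table 2):
grid_independence([(55, 2.20), (65, 2.08), (75, 2.02), (85, 2.015)])
```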
Conclusions
In this paper, the mixed convective motion in a lid-driven heated chamber saturated with an Al2O3-Cu-H2O hybrid nanosuspension and affected by a horizontal magnetic field was numerically simulated. The vorticity-stream function formulation was solved using a high-order compact technique. The results were validated against available numerical simulations and experimental data. The fluid flow properties and heat convection for various Hartmann numbers (Ha = 0-60) and hybrid nanoparticle volume fractions (φ = 0-0.05) at Ri = 0.01 and Ri = 1 were obtained, and the following conclusions were reached.
− The energy transfer intensity, and consequently the Nusselt number, diminished with increments of Ha and Ri.
− Isotherm patterns were reshaped by the presence of the hybrid nanoparticles at the lower Richardson number.
− The inclusion of Al2O3 nanoparticles improved the energy transfer performance for all studied Ri and Ha; adding Cu nanoparticles was highly effective at lower Ri but had no significant effect at higher Ri.
− The magnetic field intensified the influence of the nanoparticles on the flow dynamics.
− According to the results, applying the hybrid nanoparticles did not always enhance the heat transfer rate, which means that other parameters, such as the Richardson number, can affect the effectiveness of the hybrid nanoparticles.
Massive star clusters as an alternative source population of galactic cosmic rays
Extended γ-ray emissions in the vicinity of young star clusters are believed to be produced by the interaction of CRs accelerated therein with the ambient gas. Detailed spatial analysis reveals a 1/r type distribution of CRs, which indicates a continuous injection of CRs in these objects. The hard, ∝ E^−2.3 type power-law energy spectra of the parent protons continue up to ∼ 1 PeV. The efficiency of conversion of the kinetic energy of powerful stellar winds to CRs can be as high as 10%. This implies that young massive stars can operate as effective proton PeVatrons with a major or even dominant contribution to the flux of the highest-energy galactic CRs.
Introduction
The origin of cosmic rays (CRs) has remained a mystery for more than a century after the discovery of these relativistic particles. The current paradigm postulates that the bulk of the CRs are accelerated by supernova remnants (SNRs). In this regard, γ-ray observations provide indirect evidence of CR acceleration in SNRs. The detection of the so-called π⁰-decay bump in the spectra of several mid-age SNRs (Fermi-LAT Collaboration 2013; Giuliani et al. 2011) is considered substantial evidence of the acceleration of protons and nuclei in SNRs. Furthermore, the detection of more than a dozen young (a few thousand years old or younger) SNRs in TeV γ-rays (H.E.S.S. Collaboration 2006, 2007) highlights these objects as efficient particle accelerators. However, the mid-age SNRs reveal a break in the CR spectra at about 100 GeV, while the very origin of the γ-rays in young SNRs (hadronic or leptonic) is still unclear.
On the other hand, recent measurements of the ⁶⁰Fe abundance in CRs (Binns et al. 2016) indicate that a substantial fraction of CRs could be accelerated in young OB star clusters and related superbubbles. Measurements of the Galactic diffuse γ-ray emission also show that the CRs have a radial distribution similar to that of OB stars rather than SNRs (Acero et al. 2016; Yang et al. 2016). The interacting winds of massive stars have been recognised as potential CR accelerators as early as the 1980s. The acceleration could take place in the vicinity of the stars (Cesarsky and Montmerle 1983) or in superbubbles, multi-parsec structures caused by the collective activity of massive stars (Bykov and Toptygin 1982; Parizot et al. 2004). Acceleration at multiple shocks can raise the maximum energy of CR protons beyond 1 PeV (Klepach et al. 2000), which makes stellar clusters attractive candidates for cosmic PeVatrons. The interaction of the accelerated CRs with the surrounding gas should produce secondary γ-ray emission. The diffuse GeV γ-rays detected by Fermi LAT around the compact clusters Cygnus OB2 (Ackermann et al. 2011), NGC 3603 (Yang and Aharonian 2017) and Westerlund 2 (Yang et al. 2018) can be naturally interpreted within this scenario.
The spatial distribution of CRs is a powerful tool to diagnose the injection history and acceleration site of CRs (Aharonian and Atoyan 1996; HESS Collaboration 2016). This method relies on the accurate measurement of the spatial distributions of both the γ-rays and the gas. It has been successfully applied to the diffuse TeV γ-ray emission of the Central Molecular Zone (CMZ) in the Galactic Centre (GC) (HESS Collaboration 2016). While the hard spectra of γ-rays, extending to energies of tens of TeV, indicate the presence of a proton PeVatron(s) in the CMZ, the 1/r type radial distribution of the parent protons, revealed up to ∼ 200 pc, points to the continuous operation of proton PeVatron(s) located within the central 10 pc of our Galaxy. We argue that the compact stellar clusters Arches, Quintuplet and Nuclear in the GC could be alternative sites for the CR acceleration. In this paper, we also explore the possibility of extracting the spatial distributions of CRs in the proximity of two other clusters, Cygnus OB2 and Westerlund 1.
γ-ray observations
The extended GeV γ-ray source around the cluster NGC 3603 (Yang and Aharonian 2017) and the TeV γ-ray source associated with 30 Dor C (H.E.S.S. Collaboration 2015) are too weak for the derivation of statistically significant radial distributions of CRs. On the other hand, the angular size of the diffuse GeV source associated with Westerlund 2 is too large to be detected with the current atmospheric Cherenkov telescopes. Fortunately, in the case of the Cygnus Cocoon, discovered by the Fermi LAT collaboration as a bright extended γ-ray source associated with Cyg OB2 (Ackermann et al. 2011), the photon statistics are sufficient for the derivation of the spectral and spatial distributions of CRs. It is also important that the γ-ray emission of this source extends to TeV energies (ARGO-YBJ Collaboration 2014). The same is true for the extended TeV γ-ray emitter HESS J1646-458, apparently linked to Westerlund 1 (H.E.S.S. Collaboration 2012). For the Cygnus Cocoon, we analysed Fermi LAT data using the standard LAT software package. For HESS J1646-458, we used the radial profiles published by the H.E.S.S. collaboration (H.E.S.S. Collaboration 2012). For the distribution of molecular hydrogen, we applied the data from the CO galactic survey performed by the CfA 1.2 m millimetre-wave telescope, while for the atomic hydrogen we used the data from the Leiden/Argentine/Bonn (LAB) Survey. We also use the results for Westerlund 2 from Yang et al. (2018) in this work.
The main conclusion is that the CR density declines as r^−1 up to ≈ 50 pc from both stellar clusters. The results are shown in Fig. 1b, together with the earlier published radial distributions of CR protons in the CMZ (HESS Collaboration 2016). In Fig. 1a, we show the differential γ-ray luminosities of the extended sources associated with Cyg OB2, Westerlund 1 and the CMZ. The energy distributions of γ-rays are quite similar: dN/dE ∝ E^−Γ type differential energy spectra with power-law index Γ ≈ 2.2 extend to 10 TeV and beyond without an indication of a cutoff or a break. The γ-rays are likely to originate from interactions of CRs with the ambient gas through the production and decay of neutral π-mesons (see below). Because of the increase of the π⁰-meson production cross-section with the energy of the incident protons and nuclei, the spectrum of secondary γ-rays appears slightly harder than the spectrum of the parent protons, Γ ≈ Γ_p − 0.1 (Kelner et al. 2006); thus, the power-law index of the proton distribution should be Γ_p ≈ 2.3.
CR radial distribution
The apparent similarity of the radial (∝ r^−1) and energy (∝ E^−2.3) distributions of CR protons for different stellar clusters hints that we are observing the same phenomenon. The most natural explanation of the 1/r dependence of the CR radial distribution is that relativistic particles have been continuously injected into, and have diffused away in, the interstellar medium (ISM). The characteristic time scale is determined by the time of propagation of CRs over typical distances of tens of parsecs, t ∼ R²/D. Formally, in the case of a large diffusion coefficient D (i.e., fast diffusion), it could be as short as 10³ years. However, this would imply a very low efficiency of conversion of CRs to γ-rays. Given the tight energy budget, the diffusion time cannot be much shorter than the age of the stellar cluster (see below), t ≥ 10⁶ years. On the other hand, the acceleration of multi-TeV CRs in an individual SNR cannot last more than 10⁴ years (see, e.g., Bell et al. 2013). Thus, to support quasi-continuous CR injection, an unrealistically high rate of ∼ 1 SN per 100 years in the cluster would be required. This disfavors SNRs and gives preference to massive stellar winds as the particle accelerators.
In the case of spherically symmetric diffusion, the CR density at a distance r from the central source depends on the injection rate Q(E) and the diffusion coefficient: w(E, r) ∝ Q(E)/(r D(E)); i.e., the 1/r profile is independent of the absolute value of the diffusion coefficient unless the latter varies dramatically over scales of tens of parsecs.
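The scaling can be checked numerically; the sketch below evaluates the steady point-source solution w = Q/(4πDr) for illustrative values of the injection power and diffusion coefficient (assumptions, not fitted parameters from this paper).

```python
import numpy as np

PC_CM = 3.086e18                 # parsec in cm
ERG_TO_EV = 1.0 / 1.602e-12

def cr_density(r_pc, Q_erg_s=1e37, D_cm2_s=1e28):
    """Steady-state CR energy density w = Q / (4 pi D r) around a
    continuous point source (illustrative Q and D)."""
    r_cm = r_pc * PC_CM
    return Q_erg_s / (4.0 * np.pi * D_cm2_s * r_cm) * ERG_TO_EV   # eV/cm^3

for r in (10, 20, 50, 100):
    print(f"r = {r:3d} pc : w = {cr_density(r):6.2f} eV/cm^3")   # falls as 1/r
```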
The above relations are valid when the energy losses of CRs can be neglected. While for CR protons and nuclei this is a fully justified assumption, relativistic electrons undergo severe energy losses. However, electrons cannot be responsible for the observed γ-ray images. The leptonic (inverse Compton; IC) origin of the γ-rays is excluded both at GeV and TeV energies. Firstly, the propagation distance of multi-TeV (≫ 10 TeV) electrons in the ISM could hardly exceed 100 pc (Atoyan et al. 1995). Moreover, inside a typical cluster of radius less than 3 pc and overall starlight luminosity L_r ≈ 10⁴⁰ erg/s, the energy density of optical photons exceeds u_r ∼ L/(4π r² c) ≈ 100 eV/cm³. Outside the cluster, u_r decreases as 1/r²; thus, up to tens of parsecs, it dominates over the average radiation density in the Galactic plane. Therefore, in the case of an IC origin of the γ-rays, we would expect a sharp increase of the γ-ray intensity towards a bright central source coinciding with the cluster. The brightness distributions of the observed γ-ray images of the objects discussed in this paper do not agree with this prediction. It is convenient to write the radial distribution of the CR density in the form

w(r) = w₀ (r/r₀)^−1. (1)

Below, we adopt r₀ = 10 pc, i.e., we normalise the CR proton density w₀ outside but not far from the cluster.
Then, the total energy of CR protons within a volume of radius R₀ is

W_p = ∫₀^(R₀) w(r) 4π r² dr = 2π w₀ r₀ R₀². (2)

From the values of w₀ listed in Table 1, we obtain W_p ≈ 1.5 × 10⁵¹, 3.2 × 10⁴⁹, 1.4 × 10⁴⁸, and 2.3 × 10⁴⁹ erg for Westerlund 2 Cocoon, Westerlund 1 Cocoon, Cygnus Cocoon, and CMZ, respectively. This estimate strongly depends on the value of R_obs, which is determined by the brightness of the γ-ray image. The extensions of the large diffuse structures depend on the detector's performance, the level of the background, etc. Thus, the content of CR protons within R_obs does not provide information about all CRs injected into the ISM. The latter can be calculated by integrating Eq. (1) up to the so-called diffusion radius R_D, the maximum distance penetrated by a particle of energy E during the time T₀. In the case of negligible energy losses of propagating particles,

R_D = 2 (D T₀)^(1/2) ≈ 3.6 (D₃₀ T₆)^(1/2) kpc, (3)

where D₃₀ is the diffusion coefficient of protons in units of 10³⁰ cm²/s, and T₆ is T₀ normalised to 10⁶ years. The ages of the individual clusters vary in a narrow range between 2 and 7 Myr (see Table 1). In the source neighbourhood, the diffusion coefficient cannot be very large; otherwise, the demand on the total energy in CRs would exceed the available energy contained in the stellar winds,

W_tot = f L₀ T₀ = 3 × 10⁵² f L₃₉ T₆ erg, (4)

where L₃₉ is the total mechanical power of the stellar winds L₀ in units of 10³⁹ erg/s, and f is the efficiency of conversion of the wind kinetic energy to relativistic protons with energy larger than 10 TeV. Substituting R₀ = R_D into Eq. (2) and equating the result to Eq. (4), we obtain

f = 8π w₀ r₀ D / L₀. (5)

[Fig. 1: (a) Differential γ-ray luminosities of the extended sources. (b) The CR proton radial distributions in Cyg Cocoon and Wd 2 Cocoon above 100 GeV, and in Wd 1 Cocoon and CMZ above 10 TeV. The γ-ray flux enhancement factor due to the contribution of CR nuclei was assumed to be 1.5. For comparison, the energy densities of CR protons above 100 GeV and 10 TeV based on the measurements by AMS (Aguilar et al. 2015) are also shown.]

The large resolved size (300 pc) and the large CR density in Westerlund 2 (w₀ = 6 eV/cm³), combined with the well-known age (2 × 10⁶ years) and the available energy budget in the form of the kinetic energy of stellar winds (2 × 10³⁸ erg/s), robustly constrain the CR acceleration efficiency in the cluster and the diffusion coefficient in its Cocoon. Indeed, from the obvious condition R_obs ≤ R_D, Eq. (3) gives D₂₉ ≥ 0.04. Substituting this lower limit into Eq. (5), we obtain f ∼ 0.1, i.e., the acceleration efficiency should be as large as 10 percent. Actually, depending on the shape of the CR spectrum below 100 GeV, the lower limit for f could be a factor of a few higher. This implies that, for any reasonable acceleration efficiency, the upper and lower limits on the diffusion coefficient shrink its value to a few times 10²⁷ cm²/s, significantly smaller than the diffusion coefficient in the interstellar medium. The requirements on the parameters of the other clusters are less stringent because of the smaller values of the (resolved) extensions of the γ-ray sources and/or the lower CR densities. However, the reported angular extensions of the γ-ray sources could not be used as unbiased measures of the real physical size of the object. The parameter w₀, experimentally derived from the detected γ-ray images, is the more objective quantity. In particular, the comparable values of w₀ in Westerlund 2 Cocoon and Westerlund 1 Cocoon (taking into account that the densities in these objects are derived in different energy bands) seem quite natural, given the almost identical parameters characterising these two clusters of massive stars. On the other hand, the significantly lower level of the CR density in Cygnus Cocoon and CMZ can be explained either by a low efficiency of conversion of the kinetic energy of the winds to CRs, and/or by faster diffusion of relativistic particles in these objects. The case of the CMZ is especially interesting, given that the total kinetic energy power in the three ultracompact (Arches, Quintuplet and Nuclear) clusters is L ≃ 10³⁹ erg/s, i.e., it exceeds by an order of magnitude the overall stellar wind power in Westerlund 1 and Westerlund 2. One can see from Eq. (5) that the acceleration efficiency could, in principle, significantly exceed 1 percent, provided that the diffusion coefficient in the CMZ is much larger than in Westerlund 2 Cocoon. Although this cannot be a priori excluded, the alternative assumption of a low efficiency of CR acceleration seems the more likely option, given the unusual nature of these ultracompact clusters, where tens of massive OB stars are packed within regions a few pc in linear size. Actually, an acceleration efficiency exceeding 10%, as derived for Westerlund 2, should not be typical for all star clusters; otherwise, it would lead to an overproduction of CRs, given that the overall kinetic energy power of massive stellar winds exceeds 10⁴² erg/s.
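The budget arithmetic of Eqs. (2) and (4) is easy to reproduce; with the Westerlund 2 numbers quoted in the text, the sketch below recovers W_p ≈ 1.6 × 10⁵¹ erg and f ≈ 0.1.

```python
import numpy as np

PC = 3.086e18        # cm per parsec
YR = 3.154e7         # s per year
EV = 1.602e-12       # erg per eV

def W_p(w0_eV_cm3, r0_pc=10.0, R0_pc=100.0):
    """Total CR proton energy inside radius R0 for w = w0 (r/r0)^-1,
    i.e. W_p = 2 pi w0 r0 R0^2 (Eq. (2))."""
    return 2.0 * np.pi * (w0_eV_cm3 * EV) * (r0_pc * PC) * (R0_pc * PC) ** 2

def efficiency(Wp_erg, L0_erg_s, T0_yr):
    """f = W_p / (L0 T0), cf. Eq. (4)."""
    return Wp_erg / (L0_erg_s * T0_yr * YR)

# Westerlund 2-like numbers quoted in the text:
Wp = W_p(6.0, R0_pc=300.0)                     # w0 = 6 eV/cm^3, R ~ 300 pc
print(f"W_p ~ {Wp:.1e} erg")                   # ~1.6e51 erg
print(f"f ~ {efficiency(Wp, 2e38, 2e6):.2f}")  # ~0.1
```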
The spectra of CR protons inside all three diffuse γ-ray sources are described by power-law energy distributions with an index Γ_p ≈ 2.3. This spectrum is formed from the initial (acceleration) spectrum of protons, Q(E) ∝ E^−Γ₀, but can be modified by the energy-dependent diffusion, J_p(E, r) ∝ Q(E)/D(E) · r^−1. For Kolmogorov-type turbulence, D(E) ∝ E^(1/3), we arrive at a "classical" E^−2 type acceleration spectrum. One cannot, however, exclude that at energies ≥ 10 TeV the diffusion depends only slightly on the energy. Even in this extreme case, the acceleration spectrum of protons would be relatively hard, with Γ₀ = Γ_p ≈ 2.3. The hard γ-ray spectra of Westerlund 1 Cocoon and the CMZ continue up to 20-30 TeV without an indication of a cutoff or a break. Correspondingly, the energy spectra of the parent protons should not break at least until 0.5 PeV (see Fig. 1a). This makes the clusters of massive stars potential sources of multi-TeV neutrinos, with a fair chance of being detected by the cubic-km volume neutrino detectors. In particular, Westerlund 1, which has the highest γ-ray flux at 20 TeV, seems to be a promising target for neutrino observations (Bykov 2014).
[Table 1 residue: cluster ages of 1.5-2.5, 3-6, 2-7, and 4-6 Myr (Figer 2008); wind kinetic luminosities L_kin of 2 × 10³⁸ erg/s (Rauw et al. 2007; Reimer et al. 2008), 2 × 10³⁸ erg/s (Ackermann et al. 2011), 1 × 10³⁹ erg/s (Muno et al. 2006), and 1 × 10³⁹ erg/s (Hußmann 2014); and distances for the four objects.]
Conclusion
The main contributors to the stellar winds are O stars and WR stars, owing to their considerable mass-loss rates and high wind velocities. The power of a single O star is of the order of 10³⁶-10³⁷ erg/s, depending on the mass, while the power of a WR star can be as high as 6 × 10³⁷ erg/s (Cesarsky and Montmerle 1983; Parizot et al. 2004). The total power of the O stars and WR stars in our Galaxy is estimated to be 3 × 10⁴¹ erg/s, using the initial mass function of O stars and the star counts of WR stars in the solar neighbourhood (Cesarsky and Montmerle 1983). The total CR injection power in our Galaxy is estimated to be between 6 × 10⁴⁰ erg/s and 3 × 10⁴¹ erg/s. Thus, the stellar winds can provide 10% to 50% of the CRs in our Galaxy if 10% of the mechanical energy can be converted to CRs. We therefore argue that stellar clusters offer a viable solution to the long-standing problem of the origin of Galactic CRs, with massive/luminous stars as major contributors to the observed fluxes of CRs up to the knee around 1 PeV. Furthermore, the multi-TeV γ-ray observations provide evidence that clusters of massive stars operating as PeVatrons may contribute substantially to the flux of Galactic CRs. The extension of spectrometric and morphological γ-ray measurements up to 100 TeV in energy and up to several degrees in angular size from the regions surrounding powerful stellar clusters would provide crucial information about the origin of CRs in general, and the physics of proton PeVatrons in particular. Such observations with the Cherenkov Telescope Array will be available in the coming years.
"year": 2019,
"sha1": "e8d30f23d3c9dbddbed856354378f041c52f572a",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s12210-019-00819-3.pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "d379ef1ce37746750105e3362a6dace09e075743",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
A Promising Solution for Food Waste: Preparing Activated Carbons for Phenol Removal from Water Streams
Phenol and its derivatives are highly toxic chemicals that are widely used in various industrial applications. Industrial wastewater streams must therefore be treated to lower the concentration of phenol before discharge. At the same time, food waste has become a major environmental problem globally, and the scientific community is eagerly seeking effective management solutions. The objective of this study was to assess the potential of utilizing food waste as a renewable and sustainable resource for the production of activated carbons for the removal of phenol from water streams. The food waste was pyrolyzed and physically activated by steam. The pyrolysis and activation conditions were optimized to obtain activated carbons with high surface area. The activated carbon with the highest surface area, 745 m² g⁻¹, was derived via activation at 950 °C for 1 h. A detailed characterization of the physicochemical and morphological properties of the activated carbons derived from food waste was performed, and a comprehensive adsorption study was conducted to investigate their potential for phenol removal from water streams. The effects of pH, contact time, and initial concentration of phenol in water were studied, and adsorption models were applied to the experimental data to interpret the adsorption process. A remarkable phenol adsorption capacity of 568 mg g⁻¹ was achieved. The results indicated that the pseudo-second-order kinetic model was better than the pseudo-first-order kinetic model at describing the kinetics of adsorption. The intraparticle diffusion model showed multiple regions, suggesting that intraparticle diffusion was not the sole rate-controlling step of adsorption. The Langmuir isotherm model was the best among the Langmuir, Freundlich, Temkin, and Dubinin−Radushkevich models at describing the phenol adsorption on activated carbons derived from food waste. This study demonstrated that food waste can be utilized to produce activated carbon with a promising capacity for phenol removal.
■ INTRODUCTION
Phenol and its derivatives are widely used chemicals in industries, such as petroleum, pharmaceutical, leather, pesticide, paper, and plastic industries. 1−4 These chemicals are classified as priority pollutants because of their high toxicity and low biodegradability. 5,6 Excessive exposure to phenolic compounds may cause negative effects on the brain, eyes, liver, skin, and other parts of humans. 7 The discharge of wastewater containing phenolic compounds is also harmful to aquatic life, which may cause oxygen depletion in water. 8 The United States (U.S.) Environmental Protection Agency (EPA) regulations set a restriction of less than 1 mg L −1 on phenol concentration in wastewater. 9 Therefore, the wastewater streams from industrial sides must be treated to lower the concentration of phenolic compounds prior to being discharged into the environment. 6,10 The treatment approaches of wastewater containing phenolic compounds include physical, chemical, and biological techniques. Photocatalysis, 11 coagulation, 12 electrochemical 13 and chemical oxidation, 14 and adsorption 5,15−17 are the most widely used techniques for removing phenolic compounds. Adsorption by activated carbons is considered the most favorable owing to their low cost, high efficiency, simplicity, and high availability. 5,7,15 Activated carbon (AC) is a carbonaceous material that has a high surface area and large pore volume, which is widely used for the adsorption of pollutants. The resources to prepare AC can be categorized into two groups: (a) fossil-related resources, such as coal, peat, lignite, and petroleum residues, which are nonrenewable and not environmentally friendly, 18 and (b) bioresources, such as agricultural waste and lignocellulosic materials. 18 Although the carbon contents of fossil resources are higher compared to biomass, leading to a higher yield of AC, the overall cost of AC produced from biomass is lower due to the low feedstock price and ecological suitability. 19 Therefore, producing AC from bioresources is preferred considering its sustainability. There are two approaches to convert biomass into AC: (a) physical activation and (b) chemical activation. The physical activation is a two-step process: pyrolysis (or carbonization) followed by activation. In the first step, the feedstock is pyrolyzed under an inert atmosphere and turned into biochar. 20−24 The pyrolysis temperature can vary from 400 to 900°C. 25 −29 In the second step, the biochar derived from pyrolysis is activated at temperatures ranging from 500 to 900°C in the presence of steam or carbon dioxide. 25−29 The chemical activation is carried out with the assistance of chemical agents, such as ZnCl 2 , 30−32 H 3 PO 4 , 33−36 H 2 SO 4 , 37 KOH, 38 NaOH, 39 and K 2 CO 3 . 40 Among all these chemicals, ZnCl 2 and H 3 PO 4 are the most commonly used activation agents. The chemical activation is commonly performed at temperatures between 450 and 600°C. 18 The atmosphere for chemical activation can be either inert gas 41 or air. 35 After chemical activation, usually, a washing process is needed to remove the chemical agents from the desired products. Both physical activation and chemical activation have their own advantages and disadvantages. ACs produced via physical activation are cleaner than those produced by chemical activation and do not need a washing process. Furthermore, physical activation avoids using corrosive or environmentally unfriendly chemical agents. 
On the other hand, chemical activation is commonly performed in one step at relatively low temperatures and results in AC with a higher surface area and larger pore volume. Various biomass resources have been studied as feedstocks for the production of renewable AC, such as tomato waste, 41 corncob, 42 palm oil kernel shell, 43,44 grape stalk, 45 apple waste, 46 coconut shell, 47 and chestnut oak shell. 48 Similar to the aforementioned biomass resources, food waste has a great potential to produce AC. The Food and Agriculture Organization of the United Nations (FAO) has pointed out that nearly 1.3 billion tons of food is thrown away during production and consumption every year. 49 Food waste is mainly composed of carbohydrates, lignin, proteins, organic acids, lipids, and ash. 50 The traditional management of food waste includes feeding animals, composting, incineration, or landfilling. 51 However, major efforts are now directed toward the utilization of food waste to produce bioenergy, biofuels, and other bioproducts. 52,53 The technologies for turning food waste into energy can be categorized into two main groups: biological and thermochemical. 54 Biological technologies include anaerobic digestion and fermentation; the former produces biogas, while the latter is utilized to generate bioethanol. Thermochemical treatments involve incineration, pyrolysis, gasification, and hydrothermal carbonization. Incineration is utilized to generate heat and energy; however, this treatment can cause major environmental problems. 55 Furthermore, food waste is not suitable for combustion due to its high moisture content, which results in a low heat density. 54 Hydrothermal carbonization can be applied to convert food waste into a high-carbon, high-energy-density material named hydrochar. 56 However, intense energy requirements are a major barrier toward the commercialization of this process. 57 Pyrolysis converts food waste into multiple products, such as syngas, biochar, and bio-oil, while gasification results mainly in syngas. Both of the aforementioned processes are considered appropriate for food waste utilization. 54 Recently, renewable activated carbons derived from various biomass resources have been tested for the removal of phenols from aqueous solution. Black wattle bark waste was investigated by Lütke et al. for the production of activated carbons; 58 the activation was performed using ZnCl₂ as the activation agent, and the maximum phenol removal capacity reached 98.6 mg g⁻¹. Lv et al. prepared activated carbons from rice husk via a two-step activation process using KOH and EDTA-4Na as activation agents. 59 Other studies reported phenol removal efficiencies of up to 96%, including in a fluidized bed reactor. 4,10 These studies have shown that renewable bioresources and wastes have great potential as activated carbons and can be effectively employed for phenol removal.
The abundance and the need for proper food waste management justify the investigation of using this waste as a potential renewable resource to produce valuable bioproducts, such as AC. The objective of this study is to prepare highsurface-area AC from food waste and test its potential for phenol adsorption in aqueous solutions. We should note that the use of renewable biomass resources for the production of activated carbons is not a new concept. However, most of the studies related to the preparation of activated carbons from food waste were actually using a specific agricultural waste as a starting material (e.g., orange peels, olive stones, palm shells, coffee grounds, coconut shells, etc.). 4,63−66 In this study, we use food waste derived from the dining halls located at the University of Connecticut (UConn); thus, this waste resource represents a complex mixture of various types of food. The generated activated carbons in this study were further applied for phenol adsorption from water streams. Lee et al. have studied the adsorption of phenol using biochar derived from food waste but not activated carbons. 67 Hence, it is the first time that a food waste complex is used for the preparation of activated carbons and applied for phenol removal from water. This study suggested that the food waste complex has a great potential to be converted into valuable activated carbons that can be effectively used for purification purposes.
Preparation of Biochar and Activated Carbons. ACs from food waste were produced via a two-step physical activation process. First, food waste was pyrolyzed to produce biochar. Typically, 3 g of washed and dried food waste was sandwiched between two pieces of quartz wool and placed at the center of a quartz tube. The quartz tube was then inserted into a vertical tube furnace. Ar was used to provide an inert atmosphere at a flow rate of 50 sccm. Ice-bathed methanol was used to absorb the bio-oil generated during pyrolysis. Mass spectroscopy (Agilent 5975C) was used to measure the gas formed during pyrolysis. The pyrolysis was performed at temperatures ranging from 275 to 525 °C with a ramp rate of 10 °C min⁻¹. The residence time was varied from 30 to 120 min. After the pyrolysis of food waste, the derived biochar was ground and sieved to a particle size smaller than 300 μm. The biochar samples were labeled as "Biochar-pyrolysis temperature-residence time". The biochar with the highest carbon content was selected to prepare AC via physical activation.
After the pyrolysis, the biochar was activated in a horizontal tube furnace. The biochar was put into an alumina boat and placed at the center of the furnace. Ar was employed as a carrier gas to carry steam from a saturator. A controller was used to regulate the temperature of the water in the saturator to keep the partial pressure of steam at approximately 50%. The activation temperature was varied from 750 to 950 °C, while the residence time was varied from 1 to 5 h. The flow rate of Ar was set to 50 sccm and the temperature ramp rate was kept at 10 °C min⁻¹. Steam-activated ACs were labeled as "FWAC-activation temperature-residence time".
Characterization of Biochar and AC. Surface areas and porosities of biochar and AC were determined by N 2 adsorption−desorption using a Micromeritics ASAP 2020C Sorption Analyzer. All materials were degassed for 12 h at 120°C under vacuum. N 2 adsorption−desorption isotherms were then gathered at 77 K under a liquid nitrogen environment. Surface areas of samples were calculated using the Brunauer− Emmett−Teller (BET) method, while pore volumes were calculated using the single-point method below P/P 0 = 0.99.
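For readers unfamiliar with the BET calculation mentioned here, a minimal Python sketch follows; the isotherm points are purely illustrative (not measured data), and the 0.05-0.30 relative-pressure window is the conventional fitting range.

```python
import numpy as np

# Sketch of the BET surface-area calculation: linearize the BET equation,
# p/[v(p0 - p)] = 1/(vm*c) + (c - 1)/(vm*c) * (p/p0), fit it over the
# 0.05-0.30 relative-pressure window, and convert the monolayer volume vm
# (cm^3 STP/g of N2) to an area.

def bet_surface_area(p_rel, v_ads):
    y = p_rel / (v_ads * (1.0 - p_rel))
    slope, intercept = np.polyfit(p_rel, y, 1)
    vm = 1.0 / (slope + intercept)       # monolayer capacity, cm^3 STP/g
    # 1 cm^3 STP of N2 covers ~4.35 m^2 (16.2 A^2 per N2 molecule)
    return vm * 4.35

p = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30])
v = np.array([120., 135., 145., 153., 160., 167.])   # cm^3 STP / g
print(f"S_BET ~ {bet_surface_area(p, v):.0f} m^2/g")
```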
A scanning electron microscope equipped with an energydispersive X-ray spectroscopy (EDX) detector was conducted to study the morphologies and regional element distributions of food waste, biochar, and AC. Scanning electron microscopy (SEM) was performed using an FEI Quanta FEG 250 scanning electron microscope operating at a potential of 10 kV.
Elemental composition of food waste, biochar, and AC was analyzed by elemental analysis and inductively coupled plasma optical emission spectroscopy (ICP-OES). Elemental analysis was applied to measure the content of carbon, hydrogen, nitrogen, and sulfur using an Elementar Vario Microcube analyzer. Noncombustible element concentrations (calcium, phosphorus, and sodium) were measured by ICP-OES using a Thermo Scientific iCAP 6500.
X-ray diffraction (XRD) patterns for biochar and AC were obtained using a Bruker D8 Advance powder diffractometer (CuKα radiation source). Chemical structures of biochar and AC were identified by 13 C nuclear magnetic resonance (NMR) and Fourier transform infrared spectroscopy (FTIR). The solid-state magic angle spinning (MAS) 13 C NMR spectra were acquired using a Bruker Advance III spectrometer. The diffuse reflectance FTIR (DRIFTS) spectra were collected on a Thermo Nicolet 6700 FTIR spectrometer with an MCT detector and a temperature-controlled Harrick Praying Mantis DRIFTS assembly. Samples were analyzed at 100°C to exclude the effects of water, and all the samples were diluted in KBr. Temperature-programmed desorption (TPD) was conducted from 60 to 1000°C at a heating rate of 10°C/ min under an Ar atmosphere using a tube furnace (Lindburg/ Blue M) and the generated gases were analyzed by mass spectroscopy (Agilent 5975C); the results are shown in the Supporting Information.
Phenol Adsorption Experiments. Phenol adsorption experiments were performed using the prepared AC with the highest surface area. A phenol stock solution with a concentration of 1 g L⁻¹ was prepared by dissolving phenol crystals in DI water. Phenol solutions of lower concentration were prepared by diluting the stock solution to the desired concentration. The batch adsorption experiments were performed with different initial phenol concentrations ranging from 10 to 500 mg L⁻¹. For a typical experiment, 10 mg of dried AC was added to 50 mL of phenol solution. The duration of adsorption was varied to obtain kinetic data. After adsorption, the liquid samples were collected by filtration. All the experiments were repeated three times. Adsorption experiments were also conducted at different pH levels. All the experiments were performed at room temperature (25 °C) with a stirring rate of 200 rpm. The concentration of phenol in the liquid samples was measured via ultraviolet−visible spectroscopy (UV−vis, Shimadzu UV-2600) using the peak height at 283 nm.
The adsorption capacity of AC at time t (q_t) was calculated by eq 1:

q_t = (C₀ − C_t) V / m (1)

where V (L) is the volume of phenol solution used for each experiment, C₀ (mg L⁻¹) is the initial concentration of phenol, C_t (mg L⁻¹) is the concentration of phenol at time t, and m (g) is the amount of AC used for each experiment. The adsorption capacity of AC at equilibrium (q_e) was calculated by eq 2:

q_e = (C₀ − C_e) V / m (2)

where C_e (mg L⁻¹) is the concentration of phenol at equilibrium. Pseudo-first-order and pseudo-second-order models were employed to describe the adsorption kinetics in this study. The pseudo-first-order kinetic model can be expressed by eq 3: 68

q_t = q_e (1 − e^(−k₁t)) (3)

where q_t (mg g⁻¹) is the adsorption capacity at time t, q_e (mg g⁻¹) is the theoretical adsorption capacity at equilibrium, and k₁ (min⁻¹) is the rate constant of the pseudo-first-order kinetic model. The pseudo-second-order kinetic model may be expressed by eq 4: 68

q_t = k₂ q_e² t / (1 + k₂ q_e t) (4)

where k₂ (g mg⁻¹ min⁻¹) is the rate constant of the pseudo-second-order kinetic model. To better understand the rate-determining step of the adsorption process, the intraparticle diffusion model was applied to the phenol adsorption process by FWAC. The Weber and Morris (1963) model has been widely used to describe the intraparticle diffusion process and is expressed as eq 5: 10,69

q_t = k_id t^(1/2) + C (5)

where q_t is the amount adsorbed at time t and k_id (mg g⁻¹ min⁻¹ᐟ²) is the intraparticle diffusion constant. The constant k_id is derived by plotting q_t vs t^(1/2) and conducting a linear fitting.
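A minimal sketch of the nonlinear kinetic fitting is shown below, using SciPy rather than the MATLAB routines used in the study; the uptake data are illustrative, not the measured values behind Figure 5.

```python
import numpy as np
from scipy.optimize import curve_fit

def pfo(t, qe, k1):                 # pseudo-first-order, Eq. (3)
    return qe * (1.0 - np.exp(-k1 * t))

def pso(t, qe, k2):                 # pseudo-second-order, Eq. (4)
    return (k2 * qe**2 * t) / (1.0 + k2 * qe * t)

t = np.array([0.5, 1, 2, 4, 8, 24, 48], dtype=float)           # h
qt = np.array([55, 75, 98, 112, 122, 131, 134], dtype=float)   # mg/g

(qe1, k1), _ = curve_fit(pfo, t, qt, p0=[130, 0.5])
(qe2, k2), _ = curve_fit(pso, t, qt, p0=[140, 0.01])
print(f"PFO: qe = {qe1:.1f} mg/g, k1 = {k1:.3f} 1/h")
print(f"PSO: qe = {qe2:.1f} mg/g, k2 = {k2:.4f} g/(mg h)")
```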
The adsorption equilibrium isotherms were derived by adopting the Langmuir and Freundlich isotherm models. The Langmuir isotherm model is commonly applied to model monolayer adsorption on homogeneous adsorbent surfaces, while the Freundlich isotherm model is used for heterogeneous sorption surfaces with nonuniform energy distribution. 70,71 The Langmuir and Freundlich isotherm models are expressed by eqs 6 and 7, respectively: 4

q_e = Q₀ K_L C_e / (1 + K_L C_e) (6)

q_e = K_F C_e^(1/n) (7)

where q_e (mg g⁻¹) is the adsorption capacity at equilibrium, C_e (mg L⁻¹) is the concentration of phenol at equilibrium, Q₀ (mg g⁻¹) is the maximum adsorption amount of the monomolecular layer, and K_L (L mg⁻¹) is the Langmuir constant related to adsorption energy. K_F (mg g⁻¹) is the Freundlich constant, and 1/n is a constant related to adsorption intensity.
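The isotherm fits can be set up the same way; again, the equilibrium points below are illustrative, not the measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, Q0, KL):           # Eq. (6)
    return Q0 * KL * Ce / (1.0 + KL * Ce)

def freundlich(Ce, KF, n_inv):      # Eq. (7)
    return KF * Ce**n_inv

Ce = np.array([0.5, 2, 5, 15, 40, 90, 200], dtype=float)       # mg/L
qe = np.array([60, 150, 250, 380, 470, 520, 555], dtype=float) # mg/g

(Q0, KL), _ = curve_fit(langmuir, Ce, qe, p0=[600, 0.05])
(KF, n_inv), _ = curve_fit(freundlich, Ce, qe, p0=[100, 0.3])
print(f"Langmuir:   Q0 = {Q0:.0f} mg/g, KL = {KL:.3f} L/mg")
print(f"Freundlich: KF = {KF:.1f}, 1/n = {n_inv:.2f}")
```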
In addition to the Langmuir and Freundlich isotherm models, the Temkin and Dubinin−Radushkevich (D−R) isotherm models are also well-known models for adsorption on activated carbons. The Temkin model assumes that the energy of adsorption decreases linearly with the coverage of the adsorbent surface due to adsorbent−adsorbate interactions. 4,72,73 The general form of the Temkin isotherm model is expressed as eq 8: 4,73,74

q_e = (RT/b_T) ln(K_T C_e) (8)

where T is the temperature (298 K), R is the universal gas constant (8.314 J mol⁻¹ K⁻¹), b_T (J mol⁻¹) is the Temkin isotherm constant, and K_T (L mg⁻¹) is the equilibrium binding constant.
The D−R isotherm model is based on adsorption potential theory and assumes that the adsorption process follows a pore-filling mechanism as opposed to layer-by-layer adsorption. 4,75 The model can be expressed as eq 9: 4,74,75

q_e = q_m exp(−β ε²) (9)

where q_m (mg g⁻¹) is the maximum adsorption capacity, β is the activity coefficient related to the mean free adsorption energy (mol² kJ⁻²), and ε is the Polanyi potential (kJ mol⁻¹), which is expressed as eq 10:

ε = RT ln(1 + 1/C_e) (10)

Regression Analysis. To minimize the error of the fitted parameters and obtain a better fit to the experimental data, 76 nonlinear regression was applied to obtain the parameters of the models using MATLAB. Studies have shown that nonlinear models provide more accurate results than linearized models, which can often lead to misleading conclusions. 4,77,78 The fit qualities were evaluated by the coefficient of determination (R²) and the average relative error (ARE), expressed as eqs 11 and 12, respectively: 79

R² = 1 − Σᵢ (q_i,exp − q_i,model)² / Σᵢ (q_i,exp − q̄_exp)² (11)

ARE = (100/n) Σᵢ |(q_i,exp − q_i,model)/q_i,exp| (12)

where q_i,exp is the experimental value of q (mg g⁻¹), q̄_exp is the average of all q_i,exp, q_i,model is the value of q predicted by the fitted model, and n is the number of data points measured in the experiments.
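The two fit-quality metrics translate directly into code:

```python
import numpy as np

def r_squared(q_exp, q_model):
    """Coefficient of determination, Eq. (11)."""
    ss_res = np.sum((q_exp - q_model) ** 2)
    ss_tot = np.sum((q_exp - np.mean(q_exp)) ** 2)
    return 1.0 - ss_res / ss_tot

def are(q_exp, q_model):
    """Average relative error in percent, Eq. (12)."""
    return 100.0 / len(q_exp) * np.sum(np.abs((q_exp - q_model) / q_exp))
```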
■ RESULTS AND DISCUSSION
Yields of Pyrolysis and Activation. A set of pyrolysis experiments were performed at different temperatures and residence times to reveal the effects of the operating conditions on the yield and carbon content of biochar from food waste. Figure S1 shows the biochar, sludge (the viscous and dark biooil left in the reactor), gas, and liquid yields as a function of pyrolysis temperature. Figure S2 shows the aforementioned yields as a function of pyrolysis residence time. The biochar yield decreased with increasing temperature of pyrolysis. A longer residence time also resulted in slightly lower biochar yields. Elemental analysis was conducted for all the produced biochars, and the results are listed in Table S1. The results show that upon increasing the pyrolysis temperature, the carbon content of biochar increased. A longer residence time showed a positive effect on the carbon content of biochar, which became stable after 60 min of pyrolysis. The carbon content of biochar was as high as 71.9% when food waste was pyrolyzed at 525°C for 120 min. Therefore, despite the lower yields, these conditions were chosen to produce biochars as the precursor of AC from food waste.
To understand the effects of the activation conditions on the surface area and porosity of the produced AC, the temperature and residence time were varied during activation. Steam activation was first conducted at temperatures from 750 to 950 °C at a constant residence time of 3 h. From that set of experiments, the activation temperature that resulted in the AC with the highest surface area was selected. Keeping the activation temperature constant, further experiments were conducted in which the residence time was varied from 1 to 5 h.
The yields of activation (based on the mass of biochar) are summarized in Figure S3. It was clear that higher temperatures (up to 950°C) and longer residence times would result in lower AC yield. Hence, more carbon and other elements were lost during steam activation when a higher temperature and longer residence time were applied.
Characterization Results. N₂ sorption−desorption isotherms and pore size distributions of the steam-activated ACs are shown in Figure 1. BET surface areas, micropore volumes, and total pore volumes are shown in Table 1. Before activation, the surface area (S_BET) of the precursor biochar was 10 m² g⁻¹; the micropore volume (V_M) and total pore volume (V_T) were 0.004 and 0.016 cm³ g⁻¹, respectively. After activation, S_BET, V_M, and V_T of all ACs were an order of magnitude higher than those of the biochar. The AC with the highest surface area (745 m² g⁻¹) was activated at 950 °C for 1 h, while the AC with the highest total pore volume (0.792 cm³ g⁻¹) was activated at 950 °C for 5 h. The highest micropore volume of AC was 0.196 cm³ g⁻¹, obtained at 850 °C for 3 h. The activation temperature played a significant role in activation, greatly influencing S_BET, V_M, and V_T. S_BET and V_T increased with the activation temperature from 750 to 950 °C, while V_M reached a maximum at 850 °C.
The pore size distribution of the produced AC is shown in Figure 1. The biochar and the AC produced at 750°C had a very small volume of mesopores and macropores. The pore size was enlarged when the activation temperature increased from 750 to 950°C and more mesopores and macropores were formed. Taking S BET as a criterion, the effects of residence time were then studied by varying the duration of activation from 1 to 5 h at 950°C. The results showed a decrease in BET surface areas and micropore volumes and an increase in their total pore volume when the residence time increased from 1 to 5 h. Figure 1b suggests that larger pores were formed at a longer activation time, which indicates that the effect of longer residence time is to remove more carbon by steaming and therefore enlarge the size of mesopores and macropores. 80 Due to the lower relative surface area of larger pores, turning micropores into larger pores will lower the surface area. This can explain the decrease in BET surface area when the residence time increased from 1 to 5 h.
The morphology of the biochar and several selected ACs was explored via SEM. Figure 2 shows the SEM images taken at different magnifications. A considerable amount of fibers and nonporous particles was observed in the biochar sample, consistent with its low surface area and porosity. After activation at 750 °C for 3 h, the particles showed more small pores, which led to the increase in surface area and pore volume. Activation at 950 °C for 1 h produced a highly porous AC. Energy-dispersive X-ray spectroscopy (EDX) was performed on the aforementioned samples at a magnification of 1000× to reveal the regional elements. Mineral elements such as Na, P, and Ca were found in the biochar and ACs (Table S2).
Elemental analysis was performed for all of the produced ACs and the biochar to understand the effects of activation on the changes in N, C, H, and S contents. ICP-OES analysis was also performed to reveal the changes in Ca, Na, and P. Assuming that Ca, Na, and P exist in the activated carbons in the form of Ca²⁺, Na⁺, and PO₄³⁻, the oxygen content in the corresponding minerals was calculated. The elemental compositions of the ACs and the biochar are displayed in Table 2. The washed and dried food waste had an initial carbon content of 48.8% and a relatively high hydrogen content of 7.4%. After pyrolysis, the carbon content of the generated biochar increased, while the contents of the other elements (mainly oxygen) dropped greatly. AC produced at 950 °C showed a dramatic reduction of carbon content after activation, and a longer residence time resulted in an even more severe reduction of carbon. By comparing the carbon contents and the total pore volumes of the produced ACs, it can be concluded that, generally, a higher total pore volume corresponds to a lower carbon content. This is probably attributable to the steam reforming of char to form carbon monoxide and hydrogen. 81 Therefore, a higher total pore volume indicates that more carbon is lost during the activation, while the mineral contents are preserved. The ICP results shown in Table 2 provide the concentrations of the other elements. Before pyrolysis, there were small amounts of Ca, P, and Na in the food waste: Ca was probably from bones and/or milk, Na was from salt, and P might be attributed to meat, beans, and other ingredients. After pyrolysis, the total mass of solids decreased and the concentrations of Ca, P, and Na in the char increased. After activation, because of the loss of carbon, the concentration of minerals increased even more; the greater the carbon loss during activation, the higher the Ca and P concentrations in the produced AC. Figure 3a exhibits the X-ray diffraction patterns of the biochar and ACs. The broad band at 20°−30°, which reaches maxima at 23° and 26°, and the peak located at 43° are assigned to carbon. Specifically, the peaks at 23° and 26° correspond to the (0 0 2) graphitic plane, 82,83 while the relatively small peak at 43° is related to the (1 0 0) graphite basal plane. 83,84 Almost no carbon peaks were identified for the AC produced at 950 °C, while sharp peaks at 28°, 31°, and 34.5° were detected, which can be attributed to tricalcium phosphate (Ca₃(PO₄)₂). 85 The presence of tricalcium phosphate peaks should be attributed to the increased concentration of P and Ca after the activation. Figure 3b shows the FTIR results of the biochar and ACs. The spectra are only displayed in the range of 2000 to 650 cm⁻¹ because no significant bands were found at higher wavenumbers. The peaks at 1589, 1485, and 1406 cm⁻¹ are assigned to C=C bonds of aromatic rings. 88−91 The most notable wide band for all ACs and the biochar is observed in the range of 1350 to 900 cm⁻¹, which might indicate the existence of C−O stretching vibrations of alcohols, phenols, acids, ethers, and esters. 19,88,92 The intensity of this band increased after activation, suggesting an increased ratio of C−O bonds. However, this suggestion contradicts the results of ¹³C NMR and TPD, which revealed no significant increase of C−O bonds after activation (Figures S4 and S5).
Thus, we may conclude that this band is more likely attributed to the presence of phosphorus: the peaks located at 1120 and 1050 cm⁻¹ can be assigned to P⁺−O⁻ in acid phosphate esters and to symmetrical vibration in a P−O−P chain. 93−95 The peak at 980 cm⁻¹ for the ACs produced at temperatures higher than 750 °C is attributed to P−O−P stretching 31 due to the increased percentage of phosphorus after activation. The biochar and the AC produced at 750 °C do not show this peak, possibly because of their lower phosphorus concentration, as revealed by the ICP-OES results. The peak at 876 cm⁻¹ is ascribed to the characteristic peak of asymmetric CO₃²⁻ deformation. 86,87,96 This carbonate peak shows the highest intensity for the AC produced at 750 °C and lower intensities for the ACs produced at higher temperatures. This might indicate the formation of CO₃²⁻ (CaCO₃) during activation at 750 °C, which then decomposed at higher temperatures, thereby causing the loss of carbon.
TPD studies were performed to identify and quantify the oxygen functional groups of the biochar and the FWAC (sample FWAC-950C-1H); the results are shown in the Supporting Information. TPD was performed for the biochar and FWAC-950C-1H from 60 to 1000 °C at a heating rate of 10 °C min⁻¹ under an Ar atmosphere. The CO and CO₂ peaks desorbed at various temperatures, shown in Figure S5, correspond to the different oxygen functional groups listed in Table S3. For both samples, the CO₂ peaks appear in the temperature ranges of 200−400 °C and 650−750 °C, which are attributed to carboxylic acids and lactones, respectively. 97−99 The CO signal continued to increase as the temperature increased. The apparent peak at temperatures higher than 850 °C for both materials might be assigned to carbonyl/quinone groups. 97,98 The shoulder from 600 to 800 °C is possibly attributed to phenol groups. 97−99 The concentrations of the assigned functional groups are shown in Table S3. Apparently, the concentration of oxygen functional groups in FWAC is lower than in the biochar. The ¹³C NMR results shown in the Supporting Information (Figure S4) indicated carbonyl/carboxyl groups, aromatic groups, and methoxyl groups, while the TPD results showed carboxylic acids, lactones, phenols, and carbonyl/quinone groups. Thus, the ¹³C NMR and TPD techniques complement each other for the detection of oxygen groups. Both characterization results showed that there is no significant difference in the types of oxygen functional groups between the biochar and the activated carbons; however, the concentration of the oxygen groups in the activated carbons was lower than in the biochar.
The Effects of pH. From the characterization results, FWAC-950C-1H (hereafter abbreviated as FWAC) showed the highest surface area; hence, it was selected to test the potential for pollutant (phenol) removal in the aqueous phase. The phenol adsorption experiments were first performed at various pH values (1.94 to 5.39). The solution with pH = 5.39 was prepared using only phenol and DI water, while lower pH values were achieved by adding 0.01 M HCl solution. The adsorption experiments were performed with an initial phenol concentration of 30 mg L−1 for 24 h, and the results are shown in Figure 4. As the pH increased from 1.94 to 5.30, the phenol removal increased from 34 to 77 mg g−1. As the pH further increased to 5.39, the capacity slightly decreased. These results indicate that an environment with a pH close to neutral is preferred for the removal of phenol from water using FWAC. A similar trend was also reported in the literature. 5,7,100
The Effects of Contact Time and Initial Phenol Concentration. To investigate the adsorption kinetics, the phenol adsorption experiments were conducted at various contact times (from 0.5 to 48 h) and initial phenol concentrations (10 to 50 mg L−1). All the experiments were performed without adding additional chemicals. The results are shown in Figure 5. Apparently, the adsorption capacity increased as the contact time increased, and after a period of time the adsorption reached equilibrium. The adsorption was rapid at the beginning of the experiments and approached equilibrium after 2 h. A longer time was required to reach equilibrium as the initial concentration of phenol increased. The maximum phenol removal capability of FWAC at equilibrium increased with higher initial phenol concentration. The highest phenol adsorption capacity was 134 mg g−1, which was achieved with an initial phenol concentration of 50 mg L−1.
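For reference, the uptake reported in batch experiments of this kind follows from the mass balance q = (C0 − Ct)·V/m. The minimal sketch below illustrates the calculation; the solution volume and sorbent dosage are illustrative assumptions, not values taken from this work.

```python
def uptake_mg_per_g(c0_mg_L: float, ct_mg_L: float,
                    volume_L: float, mass_g: float) -> float:
    """Adsorbed amount per gram of sorbent at time t: q = (C0 - Ct) * V / m."""
    return (c0_mg_L - ct_mg_L) * volume_L / mass_g

# Illustrative numbers: 50 mg/L initial phenol, 16.4 mg/L remaining,
# 0.05 L of solution, 12.5 mg of AC -> ~134.4 mg/g, cf. the value in the text
print(uptake_mg_per_g(50.0, 16.4, 0.05, 0.0125))
```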
To better describe the phenol uptake rate during adsorption, pseudo-first-order and pseudo-second-order kinetic models were employed (Table 3). These models help predict the time required to reach equilibrium and estimate the maximum adsorption capacity at equilibrium. The fitted parameters are listed in Table 4. The fitted pseudo-first-order and pseudo-second-order kinetic models are plotted in Figures 5 and 6, respectively. Based on the coefficient of determination (R2) and average relative error (ARE) values, it can be concluded that the pseudo-second-order kinetic model better describes the adsorption of phenol on FWAC. The qe values calculated by the pseudo-second-order kinetic model were also closer to the experimental data. Several other studies have likewise reported that the pseudo-second-order model better describes phenol adsorption by activated carbons. 1,6,59,101 The intraparticle diffusion model was applied to investigate the rate-controlling step of the adsorption process. The plot of qt vs t^1/2 is displayed in Figure 7 and the fitting parameters are shown in Table 4. From the figure, it is apparent that the overall profile did not follow a linear relationship; instead, two portions can be distinguished in the graph: a steep first region and a second region with a low slope. As the profiles were not linear and the fitted models did not pass through the origin, it can be concluded that intraparticle diffusion was not the sole rate-controlling step of the adsorption. The dual-stage behavior might be attributed to a multistep process in which the first stage involves the transport of phenol molecules from the bulk solution to the external surface of the adsorbent, while the second stage is dominated by intraparticle diffusion. 10,102−104
Figure 5. Pseudo-first-order kinetic fitting of phenol adsorption at various initial concentrations. Figure 6. Pseudo-second-order kinetic fitting of phenol adsorption at various initial concentrations.
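As an illustration of the fitting procedure behind Figures 5 and 6, the sketch below fits both kinetic forms by nonlinear regression; the (t, qt) points are synthetic stand-ins for the measured data, so the fitted constants are purely illustrative.

```python
# Pseudo-first-order:  q_t = q_e * (1 - exp(-k1 * t))
# Pseudo-second-order: q_t = (k2 * q_e^2 * t) / (1 + k2 * q_e * t)
import numpy as np
from scipy.optimize import curve_fit

def pfo(t, qe, k1):
    return qe * (1 - np.exp(-k1 * t))

def pso(t, qe, k2):
    return (k2 * qe**2 * t) / (1 + k2 * qe * t)

t = np.array([0.5, 1, 2, 4, 8, 24, 48])           # contact time, h
qt = np.array([60, 85, 110, 120, 128, 133, 134])  # uptake, mg g^-1 (illustrative)

for name, model, p0 in [("PFO", pfo, [140, 0.5]), ("PSO", pso, [140, 0.01])]:
    popt, _ = curve_fit(model, t, qt, p0=p0)
    resid = qt - model(t, *popt)
    r2 = 1 - np.sum(resid**2) / np.sum((qt - np.mean(qt))**2)
    print(f"{name}: qe = {popt[0]:.1f} mg/g, k = {popt[1]:.4f}, R2 = {r2:.4f}")
```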
Adsorption Isotherm Models. To better understand the behavior of phenol adsorption on FWAC, adsorption isotherm models were studied. The initial concentration of phenol was increased up to 500 mg L−1 to ensure the accuracy of the model parameters. All the experiments lasted 48 h to ensure the completion of adsorption. The Langmuir, Freundlich, Temkin, and D−R isotherm models were fitted to the experimental data. The fitting parameters were derived via nonlinear regression and are shown in Table 4. Figure 8 shows the experimental data and the fitted Langmuir and Freundlich isotherm models. The adsorption capacity at equilibrium increased as the initial concentration increased, which is consistent with the previous section. With an initial concentration of 500 mg L−1, the adsorption capacity reached a remarkable value of 568 mg g−1. From Figure 8, it appears that the Langmuir model fits the experimental data better, which is supported by the higher R2 value and the lower ARE. 58,59,105 The fitted curves of the Temkin and D−R models showed clear deviations from the experimental data (Figure 9). The corresponding R2 and ARE values also showed that these models were not suitable for describing phenol adsorption by FWAC. The results also indicate that the adsorption of phenol did not follow a pore-filling mechanism. Therefore, the Langmuir model might be the most appropriate model to describe the adsorption of phenol on FWAC. Lv et al., 59 Kumar and Jena, 1 and Yao et al. 106 have also found that the Langmuir model was the most suitable model to describe phenol adsorption on activated carbons.
Comparison of AC Produced from Food Waste and Other Biomass Resources. The phenol removal capacity is affected by various factors (such as pH, initial concentration, dosage, and temperature). Thus, a direct comparison between the adsorption capacity of the sorbents in this study and others in the literature is difficult. In Table 5, we present the adsorption capacity of our FWAC, as well as of activated carbons derived from other resources, along with the experimental conditions. Although the experimental conditions vary, the table can provide a rough estimate of the performance of the sorbents. It is observed that the surface area of AC derived from different biomass sources can vary significantly. For example, ACs from rice husk show a much higher surface area than ACs from other resources. The FWAC in this study showed a moderately high surface area. The phenol adsorption experiments are commonly performed at room temperature (25−30 °C) and initial phenol concentrations lower than 500 mg L−1. The sorbent dosage is commonly between 1 and 2 g L−1. This study used less AC than most references because the yield of FWAC was low and a limited amount was available for the adsorption experiments.
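The isotherm fitting follows the same nonlinear-regression pattern as the kinetics. The sketch below fits the Langmuir and Freundlich forms to illustrative (Ce, qe) pairs — assumed values, not the measured data — returning the Q0 and KL parameters that Table 4 would report.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, q0, kl):
    # q0: monolayer capacity (mg/g); kl: Langmuir constant (L/mg)
    return q0 * kl * ce / (1 + kl * ce)

def freundlich(ce, kf, n):
    return kf * ce ** (1 / n)

ce = np.array([2, 10, 40, 90, 180, 320])      # equilibrium conc., mg L^-1
qe = np.array([70, 200, 380, 470, 530, 565])  # uptake, mg g^-1 (illustrative)

popt_l, _ = curve_fit(langmuir, ce, qe, p0=[600, 0.01])
popt_f, _ = curve_fit(freundlich, ce, qe, p0=[50, 2])
print(f"Langmuir:   Q0 = {popt_l[0]:.1f} mg/g, KL = {popt_l[1]:.4f} L/mg")
print(f"Freundlich: KF = {popt_f[0]:.1f}, n = {popt_f[1]:.2f}")
```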
The maximum adsorption capacity of FWAC in this study is very high compared to the literature; one potential reason is the high mesoporosity of the FWAC used for the adsorption experiments, which might enhance the transport of phenol within the AC particles and benefit adsorption. To verify the effect of the mesoporosity of AC, the sample FWAC-850C-3H (Table 1), which showed a similar micropore volume but a lower mesopore volume compared to FWAC-950C-1H, was tested for phenol adsorption. The experiments were conducted with an initial phenol concentration of 50 mg L−1 and lasted 48 h. The adsorption capacity of FWAC-850C-3H was found to be 109.12 ± 4.58 mg g−1, which was lower than that of FWAC-950C-1H (134.36 ± 7.19 mg g−1). The lower concentration of surface oxygen groups on FWAC-950C-1H, as shown by the TPD results, compared to other activated carbons reported in the literature, 74 may also favor phenol adsorption from water. 9,74,108 Li et al. used HNO3 to increase the surface oxygen groups of activated carbons and found that the modified activated carbons showed lower phenol adsorption capability. 74 The possible reasons for the negative effect of surface oxygen groups on adsorption capacity are as follows: (1) surface oxygen groups reduce the π electron density and thus the interactions and affinity between phenolic rings and the carbon surface, 9 and (2) water molecules tend to bind to surface oxygen groups by H-bonding, reducing the accessibility of the adsorbate to the hydrophobic parts of the carbon surface. 109 The unique presence of minerals (such as Ca3(PO4)2) might also contribute to the high phenol adsorption capacity of the AC. However, the role of minerals in adsorption has been controversial in the literature. Li et al. concluded that the presence of ash in biochar inhibited the adsorption of bisphenol from aqueous solutions. 110 They proposed that the formation of minerals might block the inner pores of the sorbent and limit the available sorption sites. Wang et al. studied the effects of ash on aromatic compound adsorption and found that ash had an inhibitory effect on the sorption of aromatic compounds. 111 On the other hand, Tan et al. used biochar and demineralized biochar for dye adsorption and revealed that the adsorption of dyes benefited from the presence of inorganic minerals. 112 Zhao et al. also investigated the role of minerals in biochars in bisphenol A adsorption. 113 They found that the biochar with a higher mineral content showed a higher bisphenol adsorption capacity. Thus, further studies need to be conducted to clarify the effect of minerals on adsorption.
■ CONCLUSIONS
In this study, activated carbons were prepared from food waste and evaluated for phenol adsorption in aqueous solutions. The ACs were produced via a two-step process: pyrolysis followed by steam activation. The pyrolysis and activation conditions were varied to study their effects on the properties of the produced AC. The results demonstrated that the activation temperature and residence time greatly affect the surface area and pore volume of the produced AC. Elemental, FTIR, and XRD analyses indicated that the ACs were mainly composed of carbon, while other elements (such as Ca and P) were also found in their structure due to the complex nature and origin of food waste. The FWAC with the highest surface area (745 m2 g−1), derived by physical (steam) activation of the corresponding biochar at 950 °C for 1 h, was tested for phenol adsorption from aqueous solutions. The effects of pH, contact time, and initial phenol concentration on the adsorption capacity of FWAC were investigated. A near-neutral solution was found to be beneficial for phenol adsorption. Pseudo-first-order and pseudo-second-order kinetic models and an intraparticle diffusion model were employed to study the kinetics of adsorption, and the Langmuir, Freundlich, Temkin, and Dubinin−Radushkevich isotherm models were applied to describe the adsorption behavior of FWAC. It was found that the adsorption of phenol on FWAC follows a pseudo-second-order kinetic model, and the isotherm was more accurately represented by the Langmuir model. The pseudo-second-order kinetic model showed R2 values >0.99 for most of the initial concentrations studied in this work. The Langmuir model also showed an R2 value of 0.9942 and suggested a Q0 of 760.76 mg g−1. The intraparticle diffusion model fitting showed that intraparticle diffusion was not the sole rate-controlling step of the adsorption. The highest phenol adsorption capacity was 568 mg g−1, achieved with an initial phenol concentration of 500 mg L−1. The proposed FWAC showed remarkable potential for phenol removal, comparable with other biomass-derived activated carbons reported in the literature. However, we note that the yield of high-surface-area activated carbon from food waste was low (around 10 wt %), which means that a balance between yield and surface area has to be struck when activated carbons are prepared from food waste. Overall, the results of this study demonstrate that the utilization of food waste to produce renewable and sustainable AC can be an excellent food waste management solution.
■ ASSOCIATED CONTENT
Product yields of pyrolysis as a function of temperature, product yields of pyrolysis as a function of residence time, biochar-to-AC yield under different activation temperatures and residence times, elemental analysis of biochars derived under different conditions, EDX results for biochar and AC, 13 | 2021-04-10T05:13:34.448Z | 2021-03-25T00:00:00.000 | {
"year": 2021,
"sha1": "ac7f176c3e7a0037d0af7422907ec17d6bc03ae1",
"oa_license": "CCBYNCND",
"oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acsomega.0c06029",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ac7f176c3e7a0037d0af7422907ec17d6bc03ae1",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
259268988 | pes2o/s2orc | v3-fos-license | Tumor-Infiltrating CD8-Positive T-Cells Associated with MMR and p53 Protein Expression Can Stratify Endometrial Carcinoma for Prognosis
Background: Inspired by the molecular classification of endometrial carcinoma (EC) proposed by The Cancer Genome Atlas Research Network (TCGA), we investigated tumor-infiltrating CD8-positive T-cells as well as DNA mismatch repair (MMR) and p53 protein expression, and we developed a new classification system for ECs to predict patients' prognosis using immunohistochemical methods. Methods: The study included 128 patients with ECs who underwent surgery. Paraffin-embedded tissue sections of the tumors were stained using antibodies against MMR proteins, p53, and CD8. Cases were stratified into four classes by a sequential algorithm. An immunohistochemical classification system for ECs (ICEC) was created, comprising the HCD8, MMR-D, LCD8, and p53 LCD8 classes. Results: In ICEC, 16 cases (12.5%), 27 cases (21.09%), 67 cases (52.34%), and 18 cases (14.06%) belonged to HCD8, MMR-D, LCD8, and p53 LCD8, respectively. ICEC did not show any correlation with clinical stage, lymphovascular space invasion, or lymph node metastasis. However, the p53 LCD8 class contained a significantly higher proportion of G3 ECs and serous carcinomas (p < 0.0001). ICEC showed prognostic significance in overall survival (OS) (p < 0.0001) and disease-free survival (DFS) (p < 0.0001). The p53 LCD8 class showed the worst prognosis among the classes. Conclusions: ICEC classification is useful in predicting the prognosis of ECs.
Introduction
Endometrial carcinoma (EC) is the second most common gynecological carcinoma worldwide after cervical cancer, and it is the most common gynecological carcinoma in Japan [1]. Traditionally, endometrial carcinoma is divided into two types, type I and type II, as introduced by Bokhman [2] based on epidemiological and clinicopathological data. Type I tumors are low-grade endometrioid carcinomas with endometrial hyperplasia in the background and have a good prognosis. They are associated with excess estrogen, obesity, hypertension, hypercholesterolemia, glucose intolerance, and diabetes mellitus [2,3]. Type II tumors are high-grade tumors, including serous carcinoma and carcinosarcoma with an atrophic endometrium in the background, with poor prognosis, and they are not associated with excess estrogen and metabolic disturbances [2,3]. This dualistic model of Bokhman is useful in understanding endometrial carcinoma and managing patients with endometrial carcinoma [4]. However, this model is imperfect because there are carcinomas with ambiguous features that are difficult to classify. These include carcinomas with solid endometrioid architecture, glandular endometrioid architecture with a high nuclear grade, clear cells, and mixed epithelial components [5,6].
In 2013, The Cancer Genome Atlas Research Network (TCGA) proposed a comprehensive genomic and transcriptomic classification system for endometrial carcinoma [7]. The TCGA classification system is composed of four classes: (1) POLE (ultramutated) (POLEmut); (2) microsatellite instability (MSI) (hypermutated); (3) copy number low (endometrioid) (CN-low); and (4) copy number high (serous-like) (CN-high). This classification system is also well correlated with prognosis [7,8]. Although the TCGA classification system is useful for clinical purposes, it requires frozen material and molecular analysis. Based on the TCGA classification system, a more practical classification system for endometrial carcinoma using immunohistochemistry has been proposed by two groups [9][10][11][12]. Although these classification systems appear to be useful, the molecular analysis of POLE mutations, which remains a challenge in community hospital laboratories in Japan, is necessary in their models.
Inspired by the TCGA classification and information about the clinical significance of TILs in EC, we aimed to develop a new prognostic classification system for EC using immunohistochemistry (ICEC), which can be easily utilized in the laboratories of community hospitals in Japan, where genomic mutation analysis is still uncommon.
Cases
Cases with a pathological diagnosis of EC were sought in the pathology database of Hakodate Municipal Hospital between 2009 and 2018. Formalin-fixed paraffin-embedded tissue (FFPE) blocks from patients who underwent hysterectomy were used. The pathological diagnosis of the specimens was reevaluated, and the eligibility of the specimens was determined by one of the authors (SM). In total, 136 cases were selected. As we aimed to determine the prognostic relevance of ECs, we selected the cases according to the TCGA study criteria [7]. In this regard, those with histology including clear cell carcinoma (n = 3), neuroendocrine carcinoma (n = 3), and carcinosarcoma (n = 2) were excluded from the study. One hundred and twenty-eight cases with endometrioid histology and serous histology were eligible for the study. The clinical records and follow-up data of the patients were obtained from the clinical database of Hakodate Municipal Hospital. All the patients had follow-up data. Clinicopathological characteristics are shown in Table 1. All cases were treated according to the standard clinical guidelines. The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the institutional review board of Hakodate Municipal Hospital (IRB admission No.: 2021-12), and permission was obtained from all patients.
Immunohistochemistry
In each case, a representative FFPE block was selected for the study. Three-µm-thick FFPE sections were cut and stained using a Bond Max (Leica Biosystems K.K., Tokyo, Japan) or Bond-III (Leica Biosystems) autostainer. Six monoclonal antibodies were used for the study: p53 (DO-7, mouse monoclonal, ready to use, heat, Nichirei, Tokyo, Japan); antibodies against the DNA mismatch repair (MMR) proteins, namely MLH1 (ES05, mouse monoclonal, ×50, heat, Leica Biosystems), MSH2 (79H11, mouse monoclonal, ×150, heat, Leica Biosystems), MSH6 (PU29, mouse monoclonal, ×100, heat, Leica Biosystems), and PMS2 (M0R4G, mouse monoclonal, ×100, heat, Leica Biosystems); and CD8 (C8/144B, mouse monoclonal, ready to use, heat, Nichirei). Immunohistochemical positivity for p53 was determined when 70% or more of the tumor cells showed strong nuclear staining, or when staining was completely absent. Cases with a subclonal pattern of p53 expression were also determined to be positive according to our criteria, as stated by Singh et al. in their report [21]. MMR protein expression was considered aberrant if staining was entirely lost. If a subclonal pattern was observed, the case was also determined to be MMR-deficient according to the results reported by Stelloo et al. [22]. For CD8-positive T-cell (TC) counting, 4 randomly selected areas were used for analysis. Digital images were obtained using a Nikon DS-Fi3 digital camera and the NIS-Elements Lite software Ver. 1.00 with a 20× objective lens. Four 0.33 mm2 areas, amounting to 1.35 mm2 in total, were used to count CD8-positive TCs. The intra-tumoral and peri-tumoral infiltration of CD8-positive TCs was counted manually using counting software ver. 2.71 (katikati2: GTSOFT) by one of the authors (SM). Intra-tumoral CD8-positive TC infiltration of fewer than 400 cells in total was designated as CD8 TIL-low, and 400 cells or more as CD8 TIL-high. Accordingly, peri-tumoral CD8 infiltration of fewer than 400 cells was designated as peri CD8-low, and 400 cells or more as peri CD8-high.
Immunohistochemical Classification
The algorithm for the immunohistochemical classification of endometrial carcinoma (ICEC) is shown in Figure 1. First, the MMR protein deficiency status was checked. Cases with MMR protein deficiency were classified as MMR-D. Next, the intra-tumoral CD8-positive TC count was evaluated. Cases with CD8 TIL-high were classified into the HCD8 class, regardless of the p53 staining results. After assignment by CD8 TIL status, cases considered p53-positive and CD8 TIL-low were classified into the p53 LCD8 class. The remaining cases, which were MMR-proficient, p53-negative, and CD8 TIL-low, were classified into the LCD8 class.
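Because the algorithm is a fixed sequence of binary checks, it can be written down directly. The sketch below is a minimal Python transcription of the decision order described above; the threshold of 400 intra-tumoral CD8-positive cells per counted area follows the Immunohistochemistry section.

```python
def icec_class(mmr_deficient: bool, intratumoral_cd8: int,
               p53_positive: bool) -> str:
    """Assign one of the four ICEC classes by the published decision order."""
    if mmr_deficient:
        return "MMR-D"       # step 1: MMR status is checked first
    if intratumoral_cd8 >= 400:
        return "HCD8"        # step 2: CD8 TIL-high, regardless of p53
    if p53_positive:
        return "p53 LCD8"    # step 3: p53-positive and CD8 TIL-low
    return "LCD8"            # remaining MMR-proficient, p53-negative cases

print(icec_class(False, 520, True))   # -> "HCD8"
print(icec_class(False, 120, True))   # -> "p53 LCD8"
```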
Statistics
Statistical analysis was performed using the Statcel Ver. 3 software (OMS, Japan). The association between ICEC class and age was tested using the Kruskal-Wallis test. Associations between ICEC class and clinical stage, histological grade, histological subtype, lymphovascular space invasion (LVSI), and lymph node metastasis were calculated using the chi-square test for independence with an m × n contingency table. The correlation between intra-tumoral and peri-tumoral CD8-positive TC infiltration was calculated using the Pearson correlation coefficient. The difference in the number of infiltrating CD8-positive TCs between intra-tumoral and peri-tumoral tissue was tested by Welch's t test. Survival curves were calculated using the Kaplan-Meier method with the log-rank test.
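For readers reproducing this kind of analysis outside Statcel, the sketch below shows an equivalent Kaplan-Meier estimate and log-rank comparison in Python using the lifelines package; the durations and event indicators are randomly generated placeholders, not the study's patient data.

```python
# A hedged sketch of the survival comparison, on synthetic placeholder data.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test

rng = np.random.default_rng(0)
classes = ["HCD8", "MMR-D", "LCD8", "p53 LCD8"]
groups = np.repeat(classes, 25)
# Illustrative follow-up times (months), worse for p53 LCD8 by construction
durations = np.concatenate([rng.exponential(s, 25) for s in (80, 60, 60, 30)])
events = rng.integers(0, 2, size=groups.size)  # 1 = event (death) observed

# Kaplan-Meier curve for one class
kmf = KaplanMeierFitter()
mask = groups == "HCD8"
kmf.fit(durations[mask], event_observed=events[mask], label="HCD8")

# Log-rank test across the four ICEC classes
res = multivariate_logrank_test(durations, groups, events)
print(f"log-rank p = {res.p_value:.4f}")
```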
Clinicopathological Characteristics of ICEC
Associations between the classes determined by ICEC and clinicopathological characteristics are shown in Table 2. The mean age of the patients in the p53 LCD8 class was significantly higher than in the other classes (p = 0.001). The ICEC classes were not significantly associated with clinical stage. Histological analysis showed that a significantly higher number of serous carcinomas were observed in the p53 LCD8 class (p < 0.0001). No significant association was observed between the ICEC classes and LVSI or lymph node metastasis.
Intra-Tumoral and Peri-Tumoral CD8-Positive TCs
The numbers of intra-tumoral and peri-tumoral CD8-positive TCs are shown in Table 3. These were compared between ICEC classes. The number of CD8-positive TCs in the HCD8 class was the highest among the classes of ICEC. The number of CD8-positive TCs in the HCD8 class was also significantly higher than that in the MMR-D class (p = 0.025).
Clinical Outcomes and ICEC
Clinical outcomes were investigated using multiple parameters (Table 4). Among the classes defined by ICEC, the p53 LCD8 class showed the worst prognosis in terms of overall survival (OS) (p < 0.0001) and disease-free survival (DFS) (p < 0.0001) (Figure 7). In contrast, the HCD8 class showed excellent prognosis in OS and DFS. The International Federation of Gynecology and Obstetrics (FIGO) grade showed prognostic significance in OS (p < 0.0001) and DFS (p < 0.0001). The FIGO stage also showed prognostic significance in OS (p < 0.0001) and DFS (p = 0.0002).
Tumors with intra-tumoral CD8 TIL-high showed a better prognosis in OS (p = 0.04) and DFS (p = 0.03). However, peri-tumoral CD8 TIL did not show a significant prognostic difference.
Positive LVSI status was associated with poorer prognosis in OS (p = 0.01) and DFS (p = 0.001). Among the cases with an available lymph node metastatic status (n = 112), those with lymph node metastasis exhibited worse prognosis in OS (p = 0.0004) and DFS (p = 0.008).
Discussion
Since the introduction of TCGA's comprehensive genomic classification system for endometrial carcinoma, many studies have confirmed the usefulness of this molecular classification system in clinical practice [7,8,19,23,24]. This molecular classification system differs from Bokhman's dualistic classification system. In contrast to the incomplete clinicopathological classification of Bokhman, the molecular analysis of TCGA demonstrates the possibility of classifying endometrial carcinoma in a clinically useful way, because some tumors are difficult to classify histologically when predicting prognosis [5,25]. While histological cell types are important prognostic parameters of endometrial carcinoma, the reproducibility of histological typing is relatively low, and consensus histological diagnoses require many immunohistochemical markers [6]. McConechy et al. proposed refining the classification of endometrial carcinomas using the mutation profiles of nine genes, namely ARID1A, PPP2R1A, PTEN, PIK3CA, KRAS, CTNNB1, TP53, BRAF, and PPP2R5C [25]. They stated that the molecular profile of the tumor was useful as an adjunct to morphological classification and could serve as an aid in the classification of ECs. However, they still intended to classify ECs via a two-tiered classification [25]. The TCGA classification system proposed a new means of classification to determine the biological characteristics and behavior of the tumor, as indicated by the patient's prognosis, in order to provide a more appropriate choice of treatment modalities for each patient. The four distinct classes of the TCGA classification system have particular molecular profiles and prognostic value. Traditional histological classification cannot exactly predict each class of the TCGA classification system. In recent studies of the TCGA classification system, the four molecular and immunohistochemical classes were found to contain a variety of histological types [26,27]. In our study, each ICEC class also contained a variety of histological grades, with a significant difference between the classes (Table 2). In particular, the p53 LCD8 class, which showed the worst prognosis, had a substantial number of cases with G3 endometrioid carcinoma and serous carcinoma, although even this class contained a substantial number of cases with G1 and G2 endometrioid carcinoma. In contrast, one quarter of the HCD8 class, which exhibited excellent prognosis, showed G3 endometrioid carcinoma. Therefore, histological classification and grade alone cannot predict ICEC classification, as observed for the TCGA classification system.
POLEmut and MSI tumors of the TCGA classification are reported to be highly associated with an increased number of tumor-infiltrating TCs [16-20,28]. A mutation of POLE (DNA polymerase ε), which has polymerase activity and 3′−5′ exonuclease activity, like MMR deficiency, causes a high tumor mutation burden, resulting in the accumulation of mutated genes in cells and the production of tumor neoantigens. It has been reported that an increased number of tumor-infiltrating TCs is associated with increased neoantigen production and the expression of programmed death receptor-1 (PD-1) and its ligand, PD-L1, a target of immune checkpoint blockade, in POLEmut and MSI tumors [16,18,20,28]. In our study, the number of intra-tumoral CD8-positive TCs in the HCD8 and MMR-D classes was significantly higher than that in the LCD8 and p53 LCD8 classes. Based on these observations, the HCD8 class may capture POLEmut tumors of the TCGA classification system, although POLE mutation analysis was not conducted in this study.
In the TCGA classification system, the study materials were endometrioid, serous, and mixed carcinomas [7]. This study did not include clear cell carcinoma. In our study, only endometrioid carcinoma and serous carcinoma were included, while other histologies, including clear cell carcinoma, were excluded according to the TCGA classification.
A higher number of TILs, especially tumor-infiltrating CD8-positive TCs, is known to be related to better prognosis in endometrial carcinoma [13,14]. In our study, tumors with CD8 TIL-high status showed significantly better prognosis than those with CD8 TIL-low status in OS (p = 0.04) and DFS (p = 0.03), as shown in previous studies (Table 4) [13,14].
MSI was evaluated by immunohistochemistry as a surrogate marker for molecular analysis. It was reported that MMR deficiency was concordant with MSI in 94% of cases [22]. Although the assessment of MMR protein expression by immunohistochemistry is difficult, one study showed interobserver agreement of 92% [29]. Stelloo et al. reported that they observed the subclonal expression of MMR proteins in <3% of cases. They stated that the subclonal expression of MMR proteins should be classified as MMR deficiency because most cases with subclonal expression showed MSI-H [22]. In our study, the subclonal expression of MLH1 was observed in 2 out of 27 cases of MMR-D tumors (7%). Because these tumors were also deficient in PMS2 expression, they were classified as MMR-D tumors.
Immunohistochemistry for p53 is well known to represent TP53 mutation status. Singh et al. reported that the immunohistochemical evaluation of p53 was concordant with TP53 mutation in 92.3% of cases [21]. They observed that cases with subclonal expression of p53 belonging to the POLEmut and MMR-deficient classes did not have TP53 mutation, while those belonging to the POLE wild-type and MMR-proficient classes showed TP53 mutation in four of five cases (80%) [21]. In our study, subclonal p53 expression was observed in four cases (two MMR-D and two LCD8). Because mutation analysis for TP53 was not available in our study, the expression of p53 in these cases was classified according to our study criteria, i.e., strong staining in ≥70% of tumor cells or complete absence of staining. Tumors with aberrant p53 expression were also found in the POLEmut class of the TCGA classification system [7,30]. Considering that the POLEmut class contains tumors with a high number of intra-tumoral lymphocytes and aberrant p53 expression, the tumors with intra-tumoral TIL-high status and aberrant p53 expression observed in this study might reflect tumors of the POLEmut class in the TCGA classification system; however, this association could not be proven due to the lack of mutation analysis. Instead, we developed the new classification system employing immunohistochemical analysis of intra-tumoral CD8-positive TC infiltration, MMR status, and p53 status. In our study, CD8-positive TIL-high tumors showed a better prognosis than CD8-positive TIL-low tumors, even among tumors with aberrant p53 expression, in OS (p = 0.07) and DFS (p = 0.03) (Figure 8). Our observations showed the prognostic importance of CD8-positive TCs, which play an antitumor immunological protective role irrespective of aberrant p53 expression in tumors. Notably, this was supported by the fact that the HCD8 class showed excellent prognosis.
The practical application of TCGA classification using paraffin-embedded tissue was proposed by two groups [9-12]. Talhouk et al. proposed the ProMisE system using immunohistochemistry for MMR and p53 protein expression, along with molecular analysis of POLE mutations [9,10]. They defined four groups, POLE EDM, MMR IHC abn, p53 wt, and p53 abn, corresponding to POLEmut, MSI hypermutated, CN-low, and CN-high in the TCGA classification system, respectively. In their study, .4%, 20−29%, 43.6−45%, and 18−27% of cases were allocated to POLE EDM, MMR IHC abn, p53 wt, and p53 abn, respectively [9,10]. In our study, 12.5%, 21.09%, 52.34%, and 14.06% of cases were allocated to HCD8, MMR-D, LCD8, and p53 LCD8 (Table 2). These proportions are very similar to theirs. In this regard, our classification system might be useful to determine the biological behavior of tumors. Both the ProMisE system and the PORTEC trial showed the practical usefulness of molecular and immunohistochemical classifiers in managing patients with endometrial cancer. de Biase et al. compared the European Society of Gynecological Oncology/European Society for Radiotherapy and Oncology/European Society of Pathology endometrial cancer risk classification system (ESGO/ESTRO/ESP 2016) with immuno-molecular analysis incorporating ESGO/ESTRO/ESP 2020, to evaluate the prognostic impact in endometrial cancer. They found that the new classification system, including the analysis of POLE mutation, MMR, and p53 status, was more suitable for stratifying ECs for prognosis [24]. However, mutation analysis of POLE is the only obstacle preventing these classifiers from becoming standard surrogates for TCGA classification worldwide, because there are many countries where molecular analysis is still challenging to apply. Our ICEC solely uses immunohistochemical analysis, which can be applied even in the laboratories of community hospitals in Japan.
Our study showed the possibility that immunohistochemical analysis can stratify ECs for prognosis without molecular analysis. In particular, we could separate a population with good prognosis even among p53-positive ECs. Interestingly, the study by Meng et al. reported that POLE mutant tumors showed significantly better prognosis in grade 3 endometrioid carcinoma [31]. Their study showed the importance of separating tumors with relatively benign behavior from tumors that had previously been classified as high risk according to histological and immunohistochemical parameters. Our data are in line with the study of Meng et al. with respect to finding relatively benign ECs among tumors that were previously classified as high-risk ECs. Our developed ICEC can provide useful information for clinicians who treat patients with ECs. In our ICEC classification, ECs of the HCD8 class are considered a benign group, and ECs of the p53 LCD8 class an aggressive group. ECs of the MMR-D and LCD8 classes are considered groups of intermediate behavior.
However, it would be premature to draw conclusions because of the relatively small number of samples and shorter follow-up periods in the study. Therefore, further investigation is necessary to confirm our data. We hope that our immunohistochemical classifier will aid in the clinical management of EC in the future.
In conclusion, we showed the clinically relevant classification of endometrial carcinoma when the analysis of MMR and p53 protein expression was combined with the assessment of tumor-infiltrating CD8-positive TCs, inspired by the concepts of TCGA. We created a prognosis-associated new classification system only using immunohistochemistry and showed distinct class stratification with prognostic significance. Practically, the ICEC must be applied to biopsy specimens to provide clinicians with useful information before they choose therapeutic modalities. Further investigation is necessary to prove the usefulness of our new classifier in treating patients with endometrial carcinoma. | 2023-06-29T05:12:28.813Z | 2023-06-01T00:00:00.000 | {
"year": 2023,
"sha1": "b0a7efb3f79f8bb6f9cdb66cb4034475d7e986db",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "b0a7efb3f79f8bb6f9cdb66cb4034475d7e986db",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
257890444 | pes2o/s2orc | v3-fos-license | One-Step Synthesis of a Binder-Free, Stable, and High-Performance Electrode; Cu-O|Cu3P Heterostructure for the Electrocatalytic Methanol Oxidation Reaction (MOR)
Although direct methanol fuel cells (DMFCs) have been spotlighted in the past decade, their commercialization has been hampered by the poor efficiency of the methanol oxidation reaction (MOR) due to the unsatisfactory performance of currently available electrocatalysts. Herein, we developed a binder-free, copper-based, self-supported electrode consisting of a heterostructure of Cu3P and mixed copper oxides, i.e., cuprous–cupric oxide (Cu-O), as a high-performance catalyst for the electro-oxidation of methanol. We synthesized a self-supported electrode composed of Cu-O|Cu3P using a two-furnace atmospheric pressure–chemical vapor deposition (AP-CVD) process. High-resolution transmission electron microscopy analysis revealed the formation of 3D nanocrystals with defects and pores. Cu-O|Cu3P outperformed the MOR activity of individual Cu3P and Cu-O owing to the synergistic interaction between them. Cu-O|Cu3P exhibited the highest anodic current density of 232.5 mA cm−2 at the low potential of 0.65 V vs. Hg/HgO, which is impressive and superior to the electrocatalytic activity of its individual counterparts. The formation of defects, the 3D morphology, and the synergistic effect between Cu3P and Cu-O play a crucial role in facilitating electron transport between electrode and electrolyte to obtain the optimal MOR activity. Cu-O|Cu3P shows outstanding MOR stability for about 3600 s with 100% retention of the current density, which proves its robustness against the CO intermediate.
Introduction
The constant rise in global energy demand, together with the depletion of the Earth's fossil fuel reserves, is driving the search for new energy sources [1-4]. To address climate change and the energy crisis, direct methanol fuel cells (DMFCs) have attracted tremendous interest owing to their potential application in both mobile and stationary devices [5-7]. DMFCs have several advantages: they are simple to operate, and methanol is nontoxic and can be stored and transported very easily. More importantly, they can deliver high power density [8,9]. The electro-oxidation of methanol follows several pathways involving coupled proton and electron transfers. The methanol oxidation reaction (MOR) that occurs at the anode converts the chemical energy of methanol into electricity: CH3OH + 6OH− → CO2 + 5H2O + 6e− (at pH = 14, E0 = −0.81 V vs. NHE) [10]. The major problems of this reaction are the slow kinetics of the MOR (a multielectron transfer process) and the poisoning of the catalyst due to the formation of C-O species at the catalytic active sites, which hamper the practical use of DMFCs [11-14]. Operating DMFCs in alkaline rather than acidic media can be preferred owing to the higher efficiency obtained, the possibility of using cheaper non-noble metals, and the lower rate of harmful effects exerted by intermediates on the catalytic active sites [15,16]. The successful deployment of direct methanol fuel cell technology relies mainly on two key components: the membrane and the anode electrocatalyst. DMFCs face two major challenges, namely (a) the methanol crossover effect, which can be addressed by developing efficient membranes; and (b) slow MOR kinetics at the anode. In this scenario, efficient and high-performance anode catalysts could accelerate the reaction kinetics and improve MOR activity [17]. These two major issues not only degrade the cathode output but also reduce the fuel efficiency. Notably, the anodic MOR proceeds comparatively faster in alkaline media than in acidic media. Moreover, the kinetics of the cathodic ORR are known to be more favorable in alkaline media than in acidic media [18].
Noble metals such as Pt and Pd and their alloys provide outstanding MOR performance under both acidic and basic electrolytic conditions [19-26]. Platinum is the traditional anode catalyst for DMFCs and outperforms the other studied metals in terms of activity and stability. However, intermediates such as carbon monoxide (CO) formed during methanol oxidation block the active sites of the catalyst and hence lower the methanol electro-oxidation kinetics through catalyst poisoning, which is a roadblock to DMFC industrialization [27]. Among the Pt-based binary alloys, Pt-Ru shows promising behavior and is considered the state-of-the-art anode catalyst for DMFCs [28,29]. Nevertheless, Pt and its alloy catalysts are highly expensive, which restricts their industrial feasibility in DMFCs [30,31]. Compared to Pt, Pd-based catalysts have certain advantages: the abundance of Pd in the Earth's crust is much higher (0.015 ppm) than that of Pt (0.005 ppm), which can reduce the catalyst price. Moreover, Pd-based catalysts exhibit good corrosion stability and operate well even in alkaline conditions, which makes the commercialization of alkaline DMFCs possible [32].
In contrast to noble metal catalysts, non-noble metals are abundant, inexpensive substitutes for their platinum-group-metal counterparts in MOR applications. Furthermore, nonprecious-metal-based catalysts are the best choices for operation in alkaline DMFCs. In this regard, first-row transition metal oxides and hydroxides are possible substitutes owing to their exciting properties and electrocatalytic activity [33-36]. In recent decades, transition metals comprising Ni, Co, and Cu, as well as their compounds, have gained tremendous research attention [37]. Among the first-row d-block metals, copper-based systems are promising owing to their high catalytic performance, low price, and nontoxicity [11,38-41].
Moreover, heterostructures can exhibit much higher electrocatalytic activity than their single-component counterparts [42]. In recent years, studies have found that multicomponent systems are capable of lowering the binding strength between the metal and intermediates, thus delivering a surplus of hydroxyl ions, which is useful for the oxidation of intermediates [43]. Multicomponent heterostructures usually help balance the adsorption and desorption of the chemical intermediates formed. Since electrocatalytic reactions mostly occur at the electrode-electrolyte interface, the electronic environment and the surface properties of a catalyst determine its electrocatalytic performance. In this regard, integrating two different components by forming heterointerfaces could boost electrocatalytic activity [44,45]. Catalysts are usually immobilized on current collectors with the help of polymeric binders prior to testing. However, polymer binders increase the solution resistance and can block catalytic active sites, leading to reduced catalytic performance [46]. In this respect, the direct growth of materials on conductive supports has gained traction in recent years, as self-supported electrodes do not require binders (which increase the electrical resistance and affect long-term stability) and deliver very good output at low material loading [32,47]. Next-generation DMFC anode catalysts must exhibit high performance, good durability, and cost effectiveness.
Inspired by the concept of interface engineering, we successfully engineered Cu-O|Cu3P heterostructures as "self-supported" electrodes using a two-zone atmospheric pressure chemical vapor deposition (AP-CVD) furnace. The binder-free Cu-O|Cu3P anode unveiled promising MOR activity and stability in 1 M CH3OH containing 1 M KOH. It also showed good conductivity and high tolerance against CO, the intermediate formed during the MOR.
Experimental Section
Commercial copper foils (99.9% purity, 0.254 mm thickness) of about 1 × 3 cm2 were used as the starting material. The copper pieces were cleaned with 1 M HCl for 20 min, followed by rinsing with deionized water and drying under N2 flow. We synthesized Cu-O|Cu3P using a two-furnace atmospheric pressure chemical vapor deposition (AP-CVD) system. A fused silica (quartz) tube with an internal diameter of 22 mm was inserted through the two furnaces in series. The furnace temperatures were monitored using built-in thermocouples and were set at 350 °C for both the upstream (Z1) and downstream (Z2) furnaces. In the first step, we oxidized the Cu under atmospheric pressure (in Z2) with a temperature ramp of 5 °C min−1 to 350 °C, holding for an hour, to obtain Cu-O. In the second step, after oxidation, we purged the system with He gas at 100 sccm (99.9999%) for 30 min. Finally, we pushed an alumina boat containing NaH2PO2·H2O powder (100 mg, Glentham) into the hot upstream furnace (Z1) under He flow (60 min, 100 sccm), followed by slow natural cooling of the furnace (depicted in Scheme 1). Similarly, we carried out the oxidation of Cu foil at 500 °C (ramp of 5 °C min−1, held for 1 h), and in the next step we placed a boat containing 100 mg of NaH2PO2·H2O powder in the upstream zone (Z1) of the preheated furnace (at 350 °C) under He flow. The furnace was then cooled naturally to room temperature.
Scheme 1. The drawing demonstrates the synthesis method of Cu-O|Cu3P.
Materials Characterization
The synthesized materials on Cu foil were characterized using X-ray diffraction (XRD) (Bruker D8 Advance (Billerica, MA, USA) with Cu Kα radiation (λ = 1.5418 Å) at an operating voltage of 40 kV). Morphological and microstructural analyses were performed using a high-resolution scanning electron microscope (HR-SEM) and a high-resolution transmission electron microscope (HR-TEM). For the HR-SEM analysis, an FEI Magellan 400L (Hillsboro, OR, USA) was utilized to obtain the morphology and topography. HR-TEM analysis was carried out on a JEOL JEM-2100 (Tokyo, Japan). Before the TEM analysis, we dispersed 0.5 mg of the sample (scratched from the Cu foil) in 15 mL of absolute ethanol, drop-cast a drop of the solution onto a Ni grid, and left it to dry under ambient conditions. A Kratos Axis HS spectrometer (Kratos, San Diego, CA, USA), combined with monochromatic (Al) and dual (Mg/Al) sources, was utilized for the XPS analysis. We attached a pinch of sample to sticky carbon tape for the measurement.
Electrochemical Characterization
Electrochemical experiments were performed in a standard three-electrode system using a Biologic VSP potentiostat. Pt and Hg/HgO (filled with 1 M NaOH) served as the counter and reference electrodes, respectively. The nanostructured catalysts grown on Cu foil were directly used as the working electrodes without further treatment. The measured data used for the calculations were iR-corrected (the iR drop was compensated with respect to the open-circuit potential). The whole study was conducted in an alkaline medium (1 M KOH as the electrolyte). The electrolytic solution was prepared using Millipore water (18 MΩ). The data were measured after conducting I-V polarization of the electrodes for about 100 CV cycles. The electrocatalytic activities of the prepared catalysts were assessed using cyclic voltammetry (CV), chronoamperometry (CA), and electrochemical impedance spectroscopy (EIS).
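As a concrete note on the iR correction mentioned above, the sketch below applies the standard post-hoc compensation E_corr = E_meas − i·Ru; the uncompensated resistance and the current values are illustrative assumptions, not the measured ones.

```python
import numpy as np

def ir_corrected_potential(e_measured_V: np.ndarray,
                           i_A: np.ndarray,
                           r_u_ohm: float) -> np.ndarray:
    """E_corrected = E_measured - i * Ru (anodic currents taken positive)."""
    return e_measured_V - i_A * r_u_ohm

e = np.array([0.45, 0.55, 0.65])      # V vs. Hg/HgO
i = np.array([0.020, 0.090, 0.2325])  # A (232.5 mA cm^-2 on a ~1 cm^2 electrode)
print(ir_corrected_potential(e, i, r_u_ohm=1.2))  # Ru assumed, e.g. from EIS
```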
Results and Discussion
We obtained Cu-O, Cu3P, and Cu-O|Cu3P grown on Cu foil using the AP-CVD system at an optimized temperature of 350 °C (optical images are provided in Figure S2) and characterized them using various physical characterization techniques. Evidently, when the reaction temperature at Z2 exceeded 350 °C, the formed products started cracking off the Cu foils (as shown in Figure S1), leading to a nonuniform distribution of active catalysts. The product ended up with substantial fractures and gaps when the synthesis was carried out at 500 °C, which prevented us from using these as free-standing electrodes; however, we could use them in the form of powder catalysts. Powder XRD was used to identify the phase purity and crystallinity of the synthesized materials. Figure 1 confirms that the materials grown on Cu foil are highly crystalline in nature. After oxidation of the Cu foil, we observed diffraction peaks at 2θ of 29.6°, 36.5°, 42.2°, and 61.3°, belonging to the (110), (111), (200), and (220) planes of Cu2O (reference code 05-0667), respectively [48]. Further, the existence of CuO was confirmed, as the peaks located at 2θ of 35.5° and 38.7° fit the (002) and (111) planes of CuO (reference code 45-0937). This indicates that mixed oxides, i.e., Cu2O and CuO, were grown on the Cu foil, which we termed Cu-O. The crystal planes of the as-formed Cu3P were confirmed by indexing the existing peaks of Cu3P to their corresponding (hkl) planes (JCPDS No. 02-1263) [49]. In all cases, the peaks present at 43.45°, 50.5°, and 74.2° belong to metallic Cu (from the Cu foil). The diffraction pattern of the heterostructure (in blue) contained peaks of both Cu-O and Cu3P, with slight peak shifts towards lower/higher 2θ values, as shown in Figure 1.
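For completeness, the peak indexing above rests on Bragg's law; the short sketch below converts a few of the quoted 2θ positions to d-spacings for comparison with reference cards (the plane assignments in the comments follow the text).

```python
import math

CU_KA = 1.5418  # Angstrom, Cu K-alpha, as used for the XRD measurement

def d_spacing(two_theta_deg: float, wavelength: float = CU_KA) -> float:
    """d = lambda / (2 sin(theta)), first-order reflection (n = 1)."""
    theta = math.radians(two_theta_deg / 2)
    return wavelength / (2 * math.sin(theta))

for peak, plane in [(36.5, "Cu2O (111)"), (35.5, "CuO (002)"), (43.45, "Cu (111)")]:
    print(f"2theta = {peak:5.2f} deg -> d = {d_spacing(peak):.3f} A  ({plane})")
```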
X-ray photoelectron spectroscopy (XPS) analysis was used to examine the chemical states and surface nature of the formed hybrid nanostructure (Table S1). Figure 2a shows the high-resolution Cu 2p XP spectrum. The peak located at a binding energy of 933.1 eV belongs to Cu 2p3/2 of Cu(I), which could be due to the formation of Cu3P and Cu2O. The peak at 935.5 eV under Cu 2p3/2 can be assigned to the presence of Cu(II). Two satellite peaks at 939.8 eV and 943.7 eV can be attributed to CuO and Cu3P, respectively. The O 1s spectrum in Figure 2b indicates that the peaks arising at 531.8 eV and 533.2 eV can be ascribed to Cu-O bonding and to oxygen adsorbed on the Cu3P surface, respectively. As shown in Figure 2c (P 2p spectrum), the peaks located at binding energies of 133.8 eV and 134.6 eV belong to Cu-P and P-O bonding, respectively. Notably, the deconvoluted percentage of Cu-O is 58.5%, while the surface-adsorbed oxygen percentage is 41.5%. The deconvoluted percentage of Cu-P bonding is 56%, while 44% corresponds to POx species.
The O 1s spectrum in Figure 2b indicates that the peaks that arise at 531.8 eV and 533.2 eV can be ascribed to Cu-O bonding and to adsorption of oxygen on the Cu3P surface, respectively. As shown in Figure 2c (P 2p spectrum), the peaks located at the binding energies of 133.8 eV and 134.6 eV belong to Cu-P and P-O bonding, respectively. Notably, the deconvolution percentage of Cu-O is 58.5%, while the surface-adsorbed oxygen percentage is found to be 41.5%. The evaluated deconvolution percentage of Cu-P bonding is 56%, while 44% of POx species exists. The chemical composition and morphology of the formed Cu-O|Cu3P heterostructure were examined by high-resolution scanning electron microscopy (HR-SEM) and high-resolution transmission electron microscopy (HR-TEM) (Figure S5). The low- and high-resolution HR-SEM images (Figure 3a,b) show the formation of distinct nanocrystals with flat faces. Mostly, the nanocrystals are of irregular polygon type, with slightly truncated edges and corners. At higher resolution, a significant number of pores and voids on the structure can be seen (Figure 3c), confirming the defectiveness of the structure. The selected area electron diffraction (SAED) pattern reveals that the material is polycrystalline in nature, as provided in the inset of Figure 3c. Figure 3d shows the lattice fringes of the formed material; the higher d-spacing could be indexed to Cu-O, and the lower d-spacing value could be due to Cu3P. Moreover, a heterointerface was observed in the structure, due to the presence of two different orientations of lattice fringes separated by a border line (marked with a yellow dotted line).
The STEM mapping images show the distribution of elements (Figure 4); the individual maps show that Cu occupies the whole structure (Figure 4a). O and P are likewise distributed everywhere in the structure, as is Cu (Figure 4b,c). However, the density of phosphorus appears greater than that of oxygen, indicating that the formation of Cu3P dominates over that of Cu-O. The HR-TEM of the post-MOR sample is provided in Figure S6 in order to observe the changes. The post-MOR Cu-O|Cu3P was analyzed using HR-TEM. Figure S6a,b shows the low- and high-resolution TEM images, respectively. The morphology resembles a sheet-type structure, as shown in Figure S6a. Significant cracks and fractures were observed for Cu-O|Cu3P (high-resolution image), owing to the strong oxidation environment in the alkaline, methanol-containing electrolyte. The EDS in Figure S6c demonstrates the presence of Cu, O, and P (K stems from the electrolyte used in this study, KOH). Figure S6d shows that, after MOR in alkaline solution, the planes of Cu-O|Cu3P are significantly disordered and damaged.
Electrocatalytic MOR Study
The electrocatalytic activity of the synthesized electrodes towards anodic OER and MOR was studied in both CH3OH-free and 1 M CH3OH-containing 1 M KOH (pH = 14) electrolytes. Here, we used 1 M CH3OH as the fuel, as it is reported that concentrated methanol solution improves the power density of DMFCs and reduces the cell size [50]. We carried out cyclic voltammetry (CV) to probe the catalytic activity at a sweep rate of 20 mV s−1. The MOR activities of the three studied catalysts are shown in Figure 5a. Cu-O exhibited a MOR onset potential of 0.51 V vs. Hg/HgO and achieved current densities (j) of 10 and 50 mA cm−2 at 0.52 V and 0.61 V vs. Hg/HgO, respectively. Cu3P showed a MOR onset potential of 0.47 V vs. Hg/HgO, which is just 10 mV lower than the onset potential observed in the case of Cu-O|Cu3P. Cu3P reached 50 and 100 mA cm−2 at potentials of 0.56 and 0.60 V vs. Hg/HgO, while Cu-O|Cu3P delivered j of 50 and 100 mA cm−2 at 0.55 and 0.58 V vs. Hg/HgO. Further, Cu-O|Cu3P shows the maximum anodic current density of 232.5 mA cm−2 at the vertex potential of 0.65 V vs. Hg/HgO, while the jmax for Cu-O and Cu3P were observed as 170 and 91 mA cm−2, respectively, at the same potential. This confirms that the heterostructure has a tremendous impact on MOR in an alkaline medium. In Figure 5b, we observe that, without methanol, Cu-O|Cu3P exhibits a high OER onset potential of 0.68 V vs. Hg/HgO with a low current density, which differentiates the performance of OER from MOR.
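Potentials in this study are reported against Hg/HgO (and, in the EIS measurements below, RHE). For readers who want to compare the two scales, a minimal conversion sketch is given below; the Hg/HgO offset of about 0.098 V in 1 M KOH and the use of pH 14 are standard textbook assumptions, not values taken from the paper.

```r
# Convert a potential measured vs. Hg/HgO to the RHE scale (illustrative only).
# Assumptions: E0(Hg/HgO) ~ 0.098 V in 1 M KOH; pH = 14; Nernst slope 0.0592 V.
to_rhe <- function(e_vs_hgo, ph = 14, e0_hgo = 0.098) {
  e_vs_hgo + e0_hgo + 0.0592 * ph
}

to_rhe(c(CuO_onset = 0.51, Cu3P_onset = 0.47))  # ~1.44 and ~1.40 V vs. RHE
```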
We examined the electrocatalytic stability and catalyst poisoning by CO for the electrodes during MOR using chronoamperometry (CA) tests at constant applied potentials. Cu-O showed a significant drop in current density over a period of 3600 s, as shown in Figure 5c; this behavior could be attributed to catalyst poisoning, which indicates poor MOR stability. Similarly, in the case of Cu3P, the j over time decreased from 32 mA cm−2 (at 10 s) to 30.5 mA cm−2 (at 3600 s), also indicating low stability against the hazardous intermediates of methanol oxidation. However, the stability behavior exhibited by Cu-O|Cu3P was quite different. The current increased up to 2500 s, and then the current density reached a steady value up to 3600 s, without any loss (j = 44.5 mA cm−2 from 2500-3600 s). The improved stability of Cu-O|Cu3P indicates that the combination of Cu3P and Cu-O substantially improves the tolerance to catalyst poisoning. Usually, at the start of the reaction, each catalyst shows fast kinetics, as the active sites are free from adsorbed methanol molecules. Methanol molecules then adsorb on the electrocatalytic sites and, after several minutes, form intermediate species such as CO, CHx, and CH2O (assumed to be the rate-determining step), which contribute to catalyst poisoning. The noticeable decrease in current density could thus be attributed to poisoning of the catalysts by these hazardous chemical species [50].
Electrochemical impedance spectroscopy (EIS) is a crucial electrochemical measurement technique that helps to identify kinetic and mass transport phenomena. We carried out EIS measurements in the frequency range of 100 kHz to 100 mHz with an applied amplitude of 10 mV at faradaic potentials (corresponding MOR potentials of 0.57 V vs. RHE). Cu-O|Cu3P, Cu3P, and Cu-O showed two depressed semicircles without any indication of diffusion phenomena, as shown in the Nyquist plot (Figure 5d). The arc of the semicircle formed by Cu-O|Cu3P is considerably smaller than the semicircles of Cu3P and Cu-O, which points to the improved electronic conductivity of Cu-O|Cu3P. The two semicircles, one at higher and one at lower frequency, are attributed to the presence of two types of resistance and two time constants (Figure 5d). The equivalent circuit diagram obtained from the fitted Nyquist plot is depicted in Figure S3. Carrying out the impedance simulation correctly is pivotal for obtaining accurate data; accordingly, we replaced the capacitor (C) with a constant phase element (Q) while fitting the data to the equivalent circuit. It is widely accepted that Q and depressed semicircles are observed for solid electrodes due to surface roughness, and thus C can only be used in ideal cases [51]. In this case, R1, R2, and R3 denote the solution resistance, the resistance due to metal oxidation/adsorbed species on the metal surface, and the charge transfer resistance, respectively. The charge transfer resistance here signifies how fast charge transfer occurs during MOR with changing electrode potential while the surface coverage by the intermediate remains constant. Cu-O|Cu3P exhibits a small charge transfer resistance of 1.5 ohm, indicating faster MOR kinetics (details of the fitting parameters are given in Figure S3).
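To make the role of the constant phase element concrete, the sketch below computes the impedance of a series R1 + (R2||Q2) + (R3||Q3) topology of the kind described above. All parameter values are illustrative placeholders, not the fitted values of Figure S3; with CPE exponents below 1, the computed Nyquist arcs come out depressed, as observed here for the solid electrodes.

```r
# Impedance of R1 + (R2 || Q2) + (R3 || Q3), with Z_Q = 1 / (Q * (i*w)^a).
# A depressed semicircle appears whenever the CPE exponent a < 1.
z_cpe <- function(w, Q, a) 1 / (Q * (1i * w)^a)
z_parallel <- function(z1, z2) 1 / (1 / z1 + 1 / z2)

z_total <- function(f, R1, R2, Q2, a2, R3, Q3, a3) {
  w <- 2 * pi * f
  R1 + z_parallel(R2, z_cpe(w, Q2, a2)) + z_parallel(R3, z_cpe(w, Q3, a3))
}

f <- 10^seq(5, -1, length.out = 200)          # 100 kHz down to 100 mHz
Z <- z_total(f, R1 = 1, R2 = 0.8, Q2 = 1e-3, a2 = 0.85,
                R3 = 1.5, Q3 = 5e-2, a3 = 0.8)
plot(Re(Z), -Im(Z), type = "l", xlab = "Z' (ohm)", ylab = "-Z'' (ohm)")
```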
The electrochemical active surface area (ECSA) of the three materials was evaluated using the double-layer capacitance (Cdl), which we obtained from cyclic voltammetry (CV) plots in the non-faradaic region (Figure S4). We measured the CVs for these catalysts in a potential range of −0.2-0 V at scan rates ranging from 20 to 160 mV s−1. The calculated Cdl for Cu-O, Cu3P, and Cu-O|Cu3P are 0.4, 1.3, and 5 mF cm−2, respectively. The ECSA can be obtained as Cdl/Cs, where the specific capacitance (Cs) for flat electrodes is 40 µF cm−2. Thus, the ECSAs for Cu-O, Cu3P, and Cu-O|Cu3P are 10, 32.5, and 125 cm2, respectively. The roughness factor (RF) can be calculated from the ECSA as RF = ECSA/Ageo, where Ageo is the geometrical area of the substrate (1 cm2 for all electrodes in this study). Cu-O, Cu3P, and Cu-O|Cu3P therefore exhibit RFs of 10, 32.5, and 125, respectively. The heterostructure displays the highest double-layer capacitance, ECSA, and RF values, in good agreement with its MOR catalytic activity.
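The ECSA arithmetic above can be reproduced directly from the quoted capacitances. The short sketch below uses the paper's own Cdl values together with the assumed specific capacitance of 40 µF cm−2:

```r
# ECSA = Cdl / Cs and roughness factor RF = ECSA / A_geo (A_geo = 1 cm^2 here).
cdl_mF <- c(CuO = 0.4, Cu3P = 1.3, `CuO|Cu3P` = 5.0)  # mF cm^-2, from the text
cs_mF  <- 40e-3                                       # 40 uF cm^-2 in mF
a_geo  <- 1                                           # cm^2

ecsa <- cdl_mF / cs_mF       # 10, 32.5, 125 cm^2
rf   <- ecsa / a_geo         # 10, 32.5, 125
```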
The electrocatalytic MOR performance of the Cu-O|Cu3P heterostructure grown over Cu foil is much better than that of several earlier reports, such as a Cu(OH)2-CuO nanoneedle array on Cu foil [52], NPC dealloyed from Cu66Ti30Ni4 amorphous ribbon [40], Ni-MgO/C [53], NF-converted NiO/NF [54], hierarchical porous Co3O4/NiCo2O4 [55], and ZnCo2O4 NPs on Ni foam [56], as given in Table 1. Cu-O|Cu3P achieves a high current density of 232.5 mA cm−2 at the low potential of 0.65 V vs. Hg/HgO, while other reported self-standing electrodes achieve high current densities only at higher potentials. The high catalytic reactivity of Cu-O|Cu3P towards MOR could be attributed to (1) the direct growth of Cu-O|Cu3P on Cu foil providing close bonding between substrate and catalyst, excellent adhesion, and effective electrical contact; (2) the defective, 3D pattern of the nanocrystals and the synergistic interaction between Cu-O and Cu3P facilitating adequate passage of reactants and products; and (3) the binder-free strategy avoiding an unnecessary increase in iR drop and catalyst loss during reaction, thus improving the catalytic activity. The practical application of DMFCs thus depends on the development of anode catalysts that efficiently catalyze the methanol oxidation reaction.
Conclusions
Using a facile synthesis, we fabricated a self-supported electrode comprising a Cu-O|Cu3P heterostructure on Cu foil. The integrated architecture demonstrates enhanced electronic conductivity, remarkable electrocatalytic performance towards the electrooxidation of methanol, and good reaction stability. Specifically, it exhibits a smaller charge transfer resistance, a lower MOR onset potential, a higher current density, and better MOR diffusion kinetics than the individual Cu-based catalysts synthesized here. This improvement in electrocatalytic performance for Cu-O|Cu3P can be credited to the synergistic or interfacial effect exhibited by the heterointerfaces. The abundance of copper and the scalability of the synthesis process, along with the high anodic output, promote its implementation in DMFCs. This study might be useful for developing Cu-based heterostructures or core-shell structures to obtain optimal activity for energy conversion applications. | 2023-04-02T15:26:25.731Z | 2023-03-30T00:00:00.000 | {
"year": 2023,
"sha1": "dfd84a283c0163d5d27e366d8b5fd62384aa37cf",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-4991/13/7/1234/pdf?version=1680232413",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "da79e5f0c2df4f52d9e4681e420e2c43cffdd240",
"s2fieldsofstudy": [
"Chemistry",
"Engineering",
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
3948011 | pes2o/s2orc | v3-fos-license | Hot Speech and Exploding Bombs: Autonomic Arousal During Emotion Classification of Prosodic Utterances and Affective Sounds
Emotional expressions provide strong signals in social interactions and can function as emotion inducers in a perceiver. Although speech provides one of the most important channels for human communication, its physiological correlates, such as activations of the autonomic nervous system (ANS) while listening to spoken utterances, have received far less attention than other domains of emotion processing. Our study aimed at filling this gap by investigating autonomic activation in response to spoken utterances that were embedded into larger semantic contexts. Emotional salience was manipulated by providing information on alleged speaker similarity. We compared these autonomic responses to activations triggered by affective sounds, such as exploding bombs and applause. These sounds had been rated and validated as being either positive, negative, or neutral. As physiological markers of ANS activity, we recorded skin conductance responses (SCRs) and changes of pupil size while participants classified both prosodic and sound stimuli according to their hedonic valence. As expected, affective sounds elicited increased arousal in the receiver, as reflected in increased SCR and pupil size. In contrast, SCRs to angry and joyful prosodic expressions did not differ from responses to neutral ones. Pupil size, however, was modulated by affective prosodic utterances, with increased dilations for angry and joyful compared to neutral prosody, although the similarity manipulation had no effect. These results indicate that the cues provided by emotional prosody in spoken, semantically neutral utterances might be too subtle to trigger SCRs, although variation in pupil size indicated the salience of stimulus variation. Our findings further demonstrate a functional dissociation between pupil dilation and skin conductance that presumably originates from their differential innervation.
INTRODUCTION
Emotional expressions conveyed by the face, the voice, and in body gestures are strong social signals and might serve as emotion elicitors in a spectator or listener. Situations that are of relevance for someone's wellbeing or future prospects, such as meeting an aggressor on the street, possess an emotional meaning that has the power to trigger emotions in the beholder. Bodily reactions, one of the key components of emotion (Moors et al., 2013), are regulated by the autonomic nervous system (ANS) and include changes in the cardiovascular system, in respiration, and in perspiration (Kreibig, 2010). While autonomic responses to affective pictures and sounds have been reliably demonstrated (e.g., Bradley et al., 2001a), little is known about ANS responses to emotional expressions, in particular with regard to spoken language. Emotional expressions in the voice, however, are of special relevance considering that speech might be the most important communication channel in humans. Our study therefore had two main aims: first, we investigated autonomic activation in response to spoken utterances of neutral semantic content but varying emotional prosody; second, we compared these responses to those triggered by another auditory domain, namely affective sounds.
There are various physiological indicators reflecting autonomic responses during emotion processing. Skin conductance responses (SCRs) are one of the most frequently used peripheral physiological markers, presumably because they are exclusively activated by the sympathetic nervous system and because they are robust against voluntary modulation. They can thus be assumed to provide an excellent measure of the elicitation of emotional arousal (Dawson et al., 2007). Another promising indicator of even unconscious and subtle changes of emotional arousal is the change of pupil size during stimulus processing (Laeng et al., 2012). The size of the pupil diameter is controlled by two muscles, innervated by both sympathetic and parasympathetic branches of the ANS, which receive input from parts of the central nervous system involved in cognitive and affective processing (e.g., Hoeks and Ellenbroel, 1993). A vast body of research has suggested that pupillary responses serve as a potent measure of top-down and bottom-up attention (e.g., Laeng et al., 2012; Riese et al., 2014), both with regard to emotional and motivational processing (e.g., Bayer et al., 2010, 2017a; Bradley et al., 2008; Partala and Surakka, 2003; Võ et al., 2008) and cognitive load (e.g., Stanners et al., 1979; Verney et al., 2001; Nuthmann and Van der Meer, 2005; Van der Meer et al., 2010). Increased attention or mental effort is accompanied by enlarged pupil dilations: the more attention, the larger the pupil size. During emotion perception and emotion recognition, pupil dilation can be influenced by both emotion-based and cognitive factors. The simultaneous consideration of SCRs and changes of pupil size might therefore help to separate the emotion-related from the cognitive sub-processes during the processing of emotional information.
Affective pictures or sounds, mainly representing violence and erotica, have been shown to robustly increase SCRs and pupil dilations of the perceiver (Partala and Surakka, 2003; Bradley et al., 2008; Lithari et al., 2010). While the processing of emotional expressions has been shown to evoke emotion-related pupil size changes (see Kuchinke et al., 2011, for prosodic stimuli, and Laeng et al., 2013, for faces), evidence for increased SCRs to emotional expressions is less clear (Alpers et al., 2011; Aue et al., 2011; Wangelin et al., 2012). Alpers et al. (2011) and Wangelin et al. (2012) directly compared SCRs to emotional faces and affective scenes. Both studies found increased SCRs to arousing scenes compared to neutral ones, but not in response to facial expressions of emotion. In contrast, Merckelbach et al. (1989) reported stronger SCRs to angry compared to happy faces, while Dimberg (1982) did not find any differences between the two conditions. SCRs to emotional prosody have been investigated even less: Aue et al. (2011) studied the influence of attention and laterality during the processing of angry prosody. Compared to neutrally spoken nonsense words, the angry speech tokens caused higher SCRs. In line with this finding, Ramachandra et al. (2009) demonstrated that nasals pronounced in an angry or fearful tone of voice elicit larger SCRs in the listener than neutrally pronounced ones, but their stimulus set consisted of only an extremely limited number of stimuli. A direct comparison between ANS responses to prosodic utterances vs. affective sounds, both conveying emotional stimuli of the same modality, has not been conducted so far.
The inconsistencies in the studies mentioned above might be explained by the absence of context in which the stimuli were presented to the participants. Experimental setups with entirely context-free presentation of emotional expressions that are unfamiliar and unimportant to the participants may simply reduce the overall social relevance of these stimuli and therefore fail to trigger robust emotion-related bodily reactions. In a recent study, Bayer et al. (2017b) demonstrated the importance of context. The authors observed increased pupil dilations to sentence-embedded, written words of emotional content in semantic contexts of high individual relevance. Similarly, perceived similarity to a person in distress increases emotional arousal in a bystander (Cwir et al., 2011). In general, sharing attitudes, interests, and personal characteristics with another person has been shown to immediately create a social link to that person (Vandenbergh, 1972; Miller et al., 1998; Jones et al., 2004; Walton et al., 2012). We therefore intended to vary the relevance of the speech stimuli by embedding them into context and manipulating the idiosyncratic similarity between the fictitious speakers and the participants.
The first aim of the present study was to test whether spoken utterances of varying emotional prosody trigger arousal-related autonomic responses, measured by pupil dilation and skin conductance, in an explicit emotion categorization task. We increased the social relevance of our speech samples by providing context information with manipulated personal similarity, in terms of biographical data, between the participant and a fictitious speaker. Second, we examined participants' physiological responses to affective sounds in comparison to the prosodic utterances. These affective sounds included, for instance, exploding bombs and applause. Based on previous findings on emotional stimuli in the visual modality, we predicted stronger arousal-related effects for the affective sounds than for the prosodic stimuli. Finally, we implemented a speeded reaction time task on the prosodic and sound stimuli in order to disentangle the cognitive and emotion-based modulations of the two physiological markers, by examining the cognitive difficulties during explicit recognition of the prosodic utterances and affective sounds.
Ethics Statement
The present study was approved by the local ethics committee of the Institute of Psychology at the Georg-August-Universität Göttingen. All participants were fully informed about the procedure and gave written informed consent prior to the experiment.
Participants
Twenty-eight female German native speakers, ranging in age between 18 and 29 years (M = 22.8), participated in the main study. The majority of participants (23 out of 28) were undergraduates at the University of Göttingen, three had just finished their studies, and two worked in a non-academic profession. Due to technical problems during the recordings, two participants had to be excluded from the analyses of pupil data. We restricted the sample to female participants in order to avoid sex-related variability in emotion reactivity (Bradley et al., 2001b; Kret and De Gelder, 2012).
Spoken Utterances With Emotional Prosody
The emotional voice samples were selected from the Berlin Database of Emotional Speech (EmoDB, Burkhardt et al., 2005). The database consists of 500 acted emotional speech tokens of 10 different sentences. These sentences were of neutral meaning, such as "The cloth is lying on the fridge" [German original: "Der Lappen liegt auf dem Eisschrank"] or "Tonight I could tell him" ["Heute abend könnte ich es ihm sagen"]. From this database we selected 30 angry, 30 joyful, and 30 neutral utterances, spoken by five female actors. Each speaker provided 18 stimuli to the final set (6 per emotion category). The stimuli had a mean duration of 2.48 ± 0.71 s (anger = 2.61 ± 0.7, joy = 2.51 ± 0.71, and neutral = 2.32 ± 0.71), with no differences between the emotion categories (Kruskal-Wallis chi-squared = 2.893, df = 2, p = 0.24). Information about the recognition of the intended emotion and the perceived naturalness was provided by Burkhardt et al. (2005). We only chose stimuli that were recognized well above chance and perceived as convincing and natural (Burkhardt et al., 2005). Recognition rates did not differ between emotion categories (see Table 1 for descriptive statistics; Kruskal-Wallis chi-squared = 5.0771, df = 2, p = 0.079). Anger stimuli were, however, perceived as more convincing than joyful stimuli (Kruskal-Wallis chi-squared = 11.1963, df = 2, p = 0.004; post hoc test with Bonferroni adjustment for anger - joy: p = 0.003). During the experiment, prosodic stimuli were preceded by short context sentences presented in written form on the computer screen. With this manipulation we aimed at providing context information in order to increase the plausibility of the speech tokens. These context sentences were semantically related to the prosodic target sentence and neutral in their wording, such as "She points into the kitchen and says" [German original: "Sie deutet in die Küche und sagt"] followed by the speech token "The cloth is lying on the fridge" ["Der Lappen liegt auf dem Eisschrank"], or "She looks at her watch and says" [German original: "Sie blickt auf die Uhr und sagt"] followed by the speech token "It will happen in 7 h" ["In sieben Stunden wird es soweit sein"].
Table 1 notes: a Burkhardt et al. (2005); b Bradley and Lang (2007). Given are the percentage of correct recognition and the percentage of perceived naturalness of the selected prosodic stimuli (data based on Burkhardt et al., 2005). Sounds were rated on a 1-9 Likert scale (1 = negative, 9 = positive; 1 = not aroused, 9 = aroused) by Bradley and Lang (2007); given are the mean and SD for the selected sample.
Affective Sounds
Forty-five affective sounds (15 arousing positive, 15 arousing negative, 15 neutral) were selected from the IADS database (International Affective Digital Sounds, Bradley and Lang, 1999). All of them had a duration of 6 s. Erotica were not used in our study, as they have been shown to be processed differently from other positive arousing stimuli (Partala and Surakka, 2003; van Lankveld and Smulders, 2008). The selected positive and negative stimuli did not differ in terms of arousal (see Table 1 for descriptive statistics; t(27) = −0.743, p = 0.463) and were significantly more arousing than the neutral stimuli (t(25) = 12.84, p < 0.001). In terms of emotional valence, positive and negative stimuli differed both from each other (t(24) = 21.08, p < 0.001) and from the neutral condition (positive-neutral: t(19) = 11.99, p < 0.001; negative-neutral: t(25) = 15.15, p < 0.001), according to the ratings provided in the IADS database. Positive and negative sounds were matched for the absolute difference of their valence from the neutral condition (t(24) = 0.159, p = 0.875). Note that this stimulus selection was based on female participants' ratings only, as provided by Bradley and Lang (2007). As the emotional sounds were rather diverse in their content, we controlled for differences in specific acoustic parameters that might trigger startle reactions or aversion and thus influence the physiological indicators used in the present study in an unintended way. These parameters included intensity, intensity onset (comprising only the first 200 ms), intensity variability (intensity standard deviation), noisiness, harmonic-to-noise ratio (HNR), and energy distribution (frequency at which 50% of the energy distribution in the spectrum was reached). Intensity parameters were calculated using Praat (Boersma and Weenink, 2009), while noisiness, energy distribution, and HNR were obtained using LMA (Lautmusteranalyse, developed by K. Hammerschmidt; Schrader and Hammerschmidt, 1997; Hammerschmidt and Jürgens, 2007; Fischer et al., 2013). We calculated linear mixed models in R to compare these parameters across the three emotion categories (see Table 2; differences in a parameter across emotion categories are indicated by lowercase letters: a p < 0.1, b p < 0.05). We conducted post hoc analyses even when the general analysis was significant only at trend level. We found differences at trend level for intensity and intensity variability, and significant effects for energy distribution across the emotion categories. Differences were marginal and unsystematically spread across the categories, meaning that no emotion category accumulated all aversion-related characteristics (see Table 2). The differences rather depict the normal variation found in complex sounds. The probability that acoustic structure confounded the physiological measures is thus low.
Similarity Manipulation
On the basis of participants' demographic data - such as first name, date and place of birth, field of study, place of domicile, living situation, and hobbies - obtained prior to the main experiment, we constructed personal profiles of the fictive speakers. They either resembled or differed from the participant's profile. Similarity was created by using the same gender, first name (or similar equivalents, e.g., Anna and Anne), the same or similar dates and places of birth, the same or a similar study program, and the same hobbies. Dissimilar characters were characterized by not being a student, being around 10 years older, not sharing the birth month and date, living in a different federal state of Germany, having a dissimilar first name, and being interested in different hobbies. Manipulations were done using the same scheme for every participant. The manipulation resulted in four personal profiles of (fictive) speaker characters that resembled the respective participant in her data, and four profiles that differed from the participant's profile. To distract participants from the aim of the study, we included trait memory tasks between acquiring the biographical information and the main experiment. Additionally, we instructed the participants to carefully read every profile presented during the experiment, as they would later have to respond to questions regarding the biographical information.
Procedure
First, participants filled out questionnaires regarding their demographics and their handedness (Oldfield, 1971). After completing the questionnaires, participants were asked to wash their hands and to remove eye make-up. Participants were then seated, with their head in a chin rest, 72 cm in front of a computer screen. Peripheral physiological measures were recorded from their non-dominant hand, while their dominant hand was free to use a button box for responding. Stimuli were presented via headphones (Sennheiser, HD 449) at a volume of around 55 dB. During and shortly after auditory presentation, participants were instructed to fixate a green circle displayed at the center of the screen in order to prevent excessive eye movements. The circle spanned a visual angle of 2.4° × 2.7° and was displayed on an equiluminant gray background. Additionally, participants were asked not to move and to avoid blinks during the presentation of target sentences.
The experiment consisted of two parts. Figure 1 gives an overview of the stimulus presentation procedure. Within the first part, prosodic stimuli were presented. Stimuli were presented twice (once in the similar and once in the dissimilar condition), resulting in a total number of 180 stimuli. The stimulus set was divided into 20 blocks of 9 stimuli (three stimuli per emotion category, i.e., anger, neutral, joy). All stimuli within one block were spoken by the same speaker and were presented in random order within a given block. Prior to every prosodic stimulus, a context sentence was presented for 3 s. The personal profile, which manipulated the similarity, was shown prior to each block for 6 s. Every second block was followed by a break. Rating was done 6 s after stimulus onset. Participants had to indicate the valence of each stimulus (positive, negative, or neutral) by pressing one of three buttons. In order to avoid early movements and thus to assure reliable SCR measures, the rating options appeared only 6 s after stimulus onset, and the valence-button assignment changed randomly for every trial. Participants were instructed to carefully read the personal profiles and to empathize with the speaker and the situation, respectively. This part lasted for about 40 min. At the end of this part, participants answered seven questions regarding biographical information about the fictitious speakers.
FIGURE 1 | Overview of stimulus presentation procedure. (A) One of the 20 presentation blocks created for the prosodic stimuli. All nine stimuli of one block were spoken by the same speaker and included, in randomized order, three neutral, three anger, and three joy sentences. (B) Stimulus presentation of sounds.
After a short break, the second part started, in which the 45 emotional sounds were presented. Every trial started with a fixation cross in the middle of the screen for 1 s. Each sound was then played for 6 s, while a circle was displayed on screen. When the sound finished, response labels (positive, negative, and neutral) were aligned in a horizontal row on the screen below the circle. The spatial arrangement of the response options was randomly changed for every trial; thus, the button order was not predictable. The 45 emotional sounds were presented twice in two independent cycles, each time in randomized order. In analogy to the prosodic part, participants were instructed to listen carefully and to indicate the valence they intuitively associated most with the sounds, without elaborative analysis of each sound's specific meaning. Short breaks were included after every 15th trial. This part of the experiment lasted for about 20 min. The experiment took approximately 60 min in total.
Psychophysiological Data Recording, Pre-processing, and Analysis
Pupil Diameter
Pupil diameter was recorded from the dominant eye using an EyeLink 1000 (SR Research Ltd.) at a sampling rate of 250 Hz. The head position was stabilized via a chin and forehead rest secured to the table. Prior to the experiment, the eye tracker was calibrated with a 5-point calibration, ensuring correct tracking of the participant's pupil. Offline, blinks and artifacts were corrected using spline interpolation. Data were then segmented around stimulus onset (time window: −1000 ms to 7000 ms) and referred to a baseline of 500 ms prior to stimulus onset. Data were analyzed in consecutive time segments of 1 s duration each. We started the analysis 500 ms after stimulus onset, to allow for a short orientation phase, and ended it 5500 ms after onset.
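A minimal sketch of the segmentation and baseline-referencing steps described above is given below. The data layout (one column per trial at 250 Hz) and all variable names are assumptions made for illustration; this is not the authors' actual pipeline.

```r
# Baseline-correct pupil traces: subtract the mean of the 500 ms pre-stimulus
# window from each sample; then average within consecutive 1 s analysis bins.
fs <- 250
t  <- seq(-1, 7, by = 1 / fs)                    # segment: -1000 to 7000 ms
baseline_idx <- which(t >= -0.5 & t < 0)

baseline_correct <- function(trial) trial - mean(trial[baseline_idx])

set.seed(1)
pupil <- matrix(rnorm(length(t) * 3, mean = 3), ncol = 3)   # toy data, 3 trials
pupil_bc <- apply(pupil, 2, baseline_correct)

bin_starts <- seq(0.5, 4.5, by = 1)              # bins covering 0.5 to 5.5 s
sapply(bin_starts, function(b) mean(pupil_bc[t >= b & t < b + 1, ]))
```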
Skin Conductance
Skin conductance was recorded at a sampling rate of 128 Hz using ActiView and the BioSemi AD-Box Two (BioSemi B.V.). The two Ag/AgCl electrodes were filled with skin conductance electrode paste (TD-246, MedCaT supplies) and placed on the palm of the non-dominant hand approximately 2 cm apart, while two additional electrodes on the back of the hand served as reference. Offline, the data were analyzed using the MATLAB-based software Ledalab V3.4.5 (Benedek and Kaernbach, 2010a). Data were down-sampled to 16 Hz and analyzed via Continuous Decomposition Analysis (Benedek and Kaernbach, 2010a). Skin conductance (SC) is a slowly reacting measure based on alterations of the electrical properties of the skin after sweat secretion. SC has long recovery times, leading to overlapping peaks in the SC signal when SCRs are elicited in quick succession. Standard peak amplitude measures are thus problematic, as peaks are difficult to differentiate and subsequent peaks are often underestimated.
Benedek and Kaernbach (2010a) developed a method that separates the underlying driver information, reflecting the sudomotor nerve activity (and thus the actual sympathetic activity), from the curve of physical response behavior (sweat secretion causing slow changes in skin conductivity) via standard deconvolution. Additionally, tonic and phasic SC components are separated, allowing a focus on the phasic, event-related activity only. After subtraction of the tonic driver, the phasic driver is characterized by a baseline of zero. Event-related activation was exported for a response window of 1-6 s after stimulus onset, taking into account the slowness of the signal (Benedek and Kaernbach, 2010b). Only activation stronger than 0.01 µS was regarded as an event-related response (Bach et al., 2009; Benedek and Kaernbach, 2010a). We used the averaged phasic driver within the respective time window as the measure of SCR. The inter-stimulus interval was 2 s for sounds (as rating normally takes around 1 s) and 7 s for prosodic stimuli (cf. Recio et al., 2009).
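Once Ledalab has produced the phasic driver, the event-related score described above reduces to a windowed mean plus a response threshold. The sketch below illustrates that final step only; the input format (one phasic-driver vector per trial at 16 Hz) and the handling of sub-threshold trials are assumptions for illustration, not the authors' exact convention.

```r
# Mean phasic driver in the 1-6 s post-onset window; activations below
# 0.01 uS are treated as non-responses here (one possible convention).
fs <- 16
score_scr <- function(driver, onset_idx) {
  win <- driver[(onset_idx + 1 * fs):(onset_idx + 6 * fs)]
  m <- mean(win)
  if (m < 0.01) 0 else m
}

set.seed(2)
driver <- pmax(rnorm(160, 0.02, 0.02), 0)   # toy 10 s trace, onset at sample 16
score_scr(driver, onset_idx = 16)
```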
Reaction Time Task
A subset of participants (20 out of 28, aged 21-30 years, M = 24.45) took part in an additional reaction time task, conducted in order to collect behavioral speed and confidence measures of emotion recognition and thereby to estimate potential cognitive difficulties in recognizing the emotional content of the stimuli. These measures could not be obtained during the main experiment because of the physiological recordings taken from the non-dominant hand and because the pupillary recordings precluded blinks during the critical time window. This part of the study was conducted with a delay of 6 months after the main experiment to ensure that participants did not remember their previous classifications of the stimulus materials. Participants sat in front of a computer screen and listened to the acoustic stimuli via headphones. They were first confronted with the emotional sounds (first part) in randomized order and were instructed to stop the stimulus as soon as they had recognized the emotion, within a critical time window of 6 s. The time window was in accordance with the one in the main experiment and corresponded to the duration of the sounds. After participants pushed a button, reflecting the time needed for successful emotion recognition, they had to indicate which emotion they perceived (positive, negative, or neutral) and how confident they were in their recognition (Likert scale 1-10), both by paper and pencil. The next trial started after a button press. In the second part, they listened to the prosodic stimuli, which had to be classified as expressing joy, anger, or neutral, respectively, within the same procedure as in the first part. The critical time window was again 6 s after stimulus onset.
FIGURE 2 | Emotion recognition for prosody (A) and sounds (B). Given are the mean values ± 95% CI. Asterisks mark the significance level: * p < 0.05, * * p < 0.01, * * * p < 0.001.
FIGURE 3 | Skin conductance response for the prosodic stimuli (A) and the sounds (B). Given is the mean ± 95% CI phasic driver activity within the response window of 1-6 s after stimulus onset. Asterisks mark the significance levels of the post hoc tests: * p < 0.05, * * p < 0.01, * * * p < 0.001.
Statistical Analysis
Statistical analyses were done in R (R Development Core Team, 2012). The similarity manipulation was included in the statistics to account for potential effects of this manipulation; additionally, this can be seen as a manipulation check. To test the effects of emotion category and similarity on recognition accuracy, we built a generalized linear mixed model with binomial error structure (GLMM, lmer function, R package lme4; Bates et al., 2011). Effects on SCRs and pupil size were analyzed using linear mixed models (LMM, lmer function). Models included emotion category, similarity, and the interaction between these two as fixed factors, and participant ID as a random factor, to control for individual differences. All models were compared to the respective null model, including the random effects only, by likelihood ratio tests (function anova). Additionally, we tested the interaction between emotion category and similarity by comparing the full model including the interaction with the reduced model excluding the interaction. We used the model without the interaction when appropriate. Models for the emotional sounds included only emotion category as a fixed factor and participant ID as a random effect. The models were compared to the respective null models by likelihood ratio tests. Normal distribution and homogeneity of variance were checked for all models by inspecting quantile-quantile plots (QQ plots) and residual plots. SCR data deviated from normal distribution and were log transformed. Pairwise post hoc tests were conducted using the glht function of the multcomp package (Hothorn et al., 2008) with Bonferroni correction. In the reaction time task, we did not compare prosody and sounds statistically, given the differences in stimulus length, number of stimuli and, from a broader perspective, the overall stimulus structure. Reaction time data were not normally distributed and were thus log transformed prior to the analysis. Recognition accuracy and reaction time data were only calculated for those stimuli that were responded to within the time window of 6 s, whereas certainty ratings were analyzed for all stimuli in order not to overestimate the ratings. We tested the effect of emotion category on recognition accuracy (using a GLMM), reaction time (using an LMM), and certainty ratings (using a cumulative link mixed model for ordinal data; package ordinal, Christensen, 2012) for both prosodic stimuli and emotional sounds. These models included emotion category as a fixed factor and participant ID as a random effect and were compared to the respective null models by likelihood ratio tests. Pairwise post hoc tests were conducted using the glht function with Bonferroni correction for recognition accuracy and reaction time. As cumulative link models cannot be used in the glht post hoc tests, we used the single comparisons of the model summary and conducted the Bonferroni correction separately.
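To make the model-comparison workflow concrete, here is a minimal sketch using the packages named above (current lme4 exposes the binomial model through glmer). The data frame, the variable names, and the toy data are illustrative assumptions, so the resulting estimates are meaningless; only the structure of the calls mirrors the description.

```r
library(lme4)
library(multcomp)

set.seed(1)
prosody <- data.frame(
  id         = factor(rep(1:28, each = 18)),
  emotion    = factor(rep(c("anger", "neutral", "joy"), times = 168)),
  similarity = factor(rep(c("similar", "dissimilar"), times = 252)),
  correct    = rbinom(504, 1, 0.9)
)

full <- glmer(correct ~ emotion * similarity + (1 | id),
              data = prosody, family = binomial)
red  <- glmer(correct ~ emotion + similarity + (1 | id),
              data = prosody, family = binomial)
null <- glmer(correct ~ 1 + (1 | id), data = prosody, family = binomial)

anova(null, full)   # likelihood ratio test against the null model
anova(red, full)    # test of the emotion x similarity interaction

# Bonferroni-adjusted pairwise comparisons of emotion categories:
summary(glht(red, linfct = mcp(emotion = "Tukey")),
        test = adjusted("bonferroni"))
```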
In addition to analyzing the emotion recognition rates in the main experiment and the reaction time task, we also calculated the unbiased hit rates (Hu scores, Wagner, 1993). Recognition rates mirror the listener's behavior in the actual task, but might be affected by the participant's bias to preferentially choose one response category. Unbiased hit rates account for the ability of a listener to distinguish the categories by correcting for a potential bias (Wagner, 1993; cf. Rigoulot et al., 2013; Jürgens et al., 2015). We descriptively report the Hu scores in order to provide a complete description of the recognition data, but focused the further analyses on recognition rates only.
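Wagner's (1993) unbiased hit rate for a category is the squared number of correct responses in that category divided by the product of the category's row total (stimuli presented) and column total (responses given). A minimal sketch, with a made-up confusion matrix for illustration:

```r
# Unbiased hit rate (Hu) per stimulus category from a confusion matrix
# (rows = presented category, columns = chosen response).
hu_scores <- function(cm) diag(cm)^2 / (rowSums(cm) * colSums(cm))

cm <- matrix(c(55,  3,  2,    # toy counts, rows: anger / neutral / joy
                4, 50,  6,
                2,  5, 53), nrow = 3, byrow = TRUE,
             dimnames = list(c("anger", "neutral", "joy"),
                             c("anger", "neutral", "joy")))
hu_scores(cm)
```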
The unbiased hit rates demonstrated that listeners had a generally high recognition ability: Hu anger: 0.872 ± 0.122; Hu neutral: 0.810 ± 0.190; Hu joy: 0.896 ± 0.099 (mean ± SD). Interestingly, anger does not stick out here, indicating that the high recognition rates for anger might be influenced by a slight bias toward choosing anger as a response, independent of the true emotion category.
Affective Sounds
The emotional content of sounds was recognized less accurately than the emotional prosody of spoken utterances, with an overall recognition accuracy of around 65% (see Figure 2B). Emotion had a significant influence on recognition, as indicated by the comparison of the full model and the null model: χ2 = 167.52, df = 2, p < 0.001. With a recognition accuracy of about 52%, neutral sounds were recognized worst (negative vs. neutral: z = 12.972, p < 0.001; negative vs. positive: z = 8.397, p < 0.001; positive vs. neutral: z = 4.575, p < 0.001). The Hu scores revealed a low ability of the participants to distinguish the emotion categories: Hu negative: 0.554 ± 0.147; Hu neutral: 0.323 ± 0.146; Hu positive: 0.453 ± 0.166 (mean ± SD).
Spoken Utterances With Emotional Prosody
Skin conductance response (Figure 3), represented by the phasic driver activity, was not affected by any of the predictors (comparison to the null model: χ2 = 1.605, df = 5, p = 0.9). This part of the experiment took 40 min. To check whether participants habituated in their responses to the emotions due to the long presentation time, we also analyzed only the first half of the experiment, which led to similar results (comparison to the null model: χ2 = 4.910, df = 5, p = 0.43).
Spoken Utterances With Emotional Prosody
We found an effect of the predictors on pupil size for the time windows 2.5-3.5 and 3.5-4.5 s after stimulus onset (comparisons to null models, see Table 3). There was no interaction between emotion category and similarity on pupil size (Table 3). Pupil size was affected by the emotion category of the speech samples (Figure 4 and Table 3). Interestingly, the increases of pupil size differed dynamically between prosodic conditions: pupil size increased quickly in response to angry stimuli, while responses to joyful stimuli were delayed by about one second (see Figure 4 and Table 4). Neutral stimuli triggered the weakest pupil response in comparison to anger and joy (Figure 4 and Table 4). The similarity condition had no effect on pupil size for the respective time windows (model comparisons: χ2 < 1.16, df = 1, p > 0.28).
Affective Sounds
The pupil size was affected by the emotional content of sounds in three time windows (Table 3 and Figure 4). Post hoc tests revealed that negative sounds elicited a stronger pupillary response than positive sounds (Table 4). Differences between negative and neutral sounds almost reached significance. These results indicate that pupil dilation does not purely reflect arousal differences.
FIGURE 5 | Emotion recognition during the Reaction Time Task. The first column (A,C,E) depicts the results for the emotional prosody with the emotion categories "anger," "neutral," and "joy"; the second column (B,D,F) represents the sounds with the categories "negative," "neutral," and "positive." (A,B) Correct emotion recognition (mean ± 95% CI) was calculated using stimuli that were responded to within the time window of 6 s. (C,D) Reaction time measures (mean ± CI) for stimuli that were responded to within the critical time window. (E,F) Certainty ratings (mean ± 95% CI), obtained from the 10-point Likert scale, were calculated for every stimulus. Asterisks mark the significance level: . p < 0.1, * p < 0.05, * * p < 0.01, * * * p < 0.001.
Spoken Utterances With Emotional Prosody
Participants responded within the specified time window in 90% of all cases (anger: 91%, neutral: 90%, joy: 89%). We calculated recognition accuracy and reaction times only for these trials. Emotion category had an influence on emotion recognition accuracy (comparison to null model: χ2 = 26.39, df = 2, p < 0.001), reaction time (χ2 = 42.29, df = 2, p < 0.001), and the certainty ratings (LR stat = 21.50, df = 2, p < 0.001). Joy was recognized significantly less accurately (91%) and more slowly (M = 2022 ms; see Figure 5 and Table 5). In the certainty ratings, however, judgments for joy did not differ from those for anger expressions. The unbiased hit rates also demonstrated that the listeners had a high recognition ability, indicating that the prosodic utterances could be distinguished easily: Hu anger: 0.903 ± 0.112; Hu neutral: 0.924 ± 0.120; Hu joy: 0.903 ± 0.106 (mean ± SD). Differences between the recognition rates and Hu scores of this and the main experiment might be caused by the fact that only stimuli that were responded to within the specified time window entered this analysis.
DISCUSSION
The present study aimed at investigating the elicitation of arousal-related autonomic responses to the emotional prosody of spoken utterances in comparison to affective sounds during explicit emotion decisions. As predicted, affective sounds elicited arousal in the perceiver, indicated by increased SCRs to negative and positive sounds as well as enlarged pupil dilations to negative stimuli. Listening to angry and joyful prosodic utterances led to increased pupil dilations but not to amplified SCRs. Biographical similarity between the fictitious speaker and the listener, employed to increase the social relevance of the spoken stimulus material, was ineffective in boosting the arousal responses of the listeners. First of all, our findings indicate that the cues determining emotional prosody in spoken, semantically neutral utterances might be too subtle to trigger physiological arousal reflected in changes of electrodermal activity (cf. Levenson, 2014; see Figure 3). These results are in accordance with previous studies on facial expressions that were presented without social context (Alpers et al., 2011; Wangelin et al., 2012). The finding of arousal-related SCRs to affective sounds demonstrated that our participants were generally able to respond sympathetically to auditory stimuli in a lab environment and, importantly, confirmed previous results (Bradley and Lang, 2000).
Emotional prosody differentially affected pupil size, reflected in larger dilations for utterances spoken with angry or joyful prosody, which is in line with a study reported by Kuchinke et al. (2011) (see Figure 4). Since pupil responses have been demonstrated to reflect the dynamic interplay of emotion and cognition and can thus not be related to arousal alone (cf. Bayer et al., 2011), our finding of different effects on pupil size and SCRs is not surprising (see also Urry et al., 2009). Instead, it provides additional evidence that SCRs and pupil responses reflect functionally different emotion- and cognition-related ANS activity. Another previous finding supports the idea that pupil dilation partly reflects cognitive effects on emotional processing. In a study by Partala and Surakka (2003), emotional sounds, taken from the same database as in our study, triggered stronger pupil dilations for both negative and positive compared to neutral sounds. In our study, however, emotionally negative sounds elicited larger dilations than positive sounds, with neutral in between (see Figure 4). Procedural differences might explain the inconsistencies between the present and the previous study, as Partala and Surakka did not employ an explicit emotion task. Similar arguments have been made by Stanners et al. (1979), suggesting that changes in pupil size only reflect arousal differences under conditions of minimal cognitive effort. In our study, for both domains - emotional sounds and prosodic utterances - participants had to explicitly categorize the emotional content or prosody of each stimulus. Since accuracy rates provide rather unspecific estimates of cognitive effort, the additional speeded decision task employed in our study allowed us to analyze the difficulties in recognizing the emotional content of both prosody and affective sounds in more detail (see Figure 5). Enhanced difficulty in recognizing neutral sounds might explain the unexpected pattern of findings, where neutral sounds elicited larger pupil dilations than positive sounds. The detailed analysis of participants' recognition ability suggests that the recognition of vocally expressed emotions does not require large cognitive resources in general, as recognition was quick and accurate. In this case, the impact of emotion on pupil size might be basically caused by arousal (Stanners et al., 1979), even though the arousal level might not have been sufficient to elicit SCRs. While cognitive task effects on SCRs have been demonstrated before (e.g., Recio et al., 2009), we find it unlikely that the SCR modulations in our study reflect cognitive task effects. In our data, the neutral sounds were recognized worst; if task effects had affected the SCRs, the responses to neutral sounds should have been increased compared to affective sounds. The temporal recognition pattern, with neutral classified most quickly, followed by anger, and joy classified with the longest delay, fits the reaction time data found in studies using a gating paradigm (Rigoulot et al., 2013). The different recognition times might also explain the delay in pupil dilation to joyful prosody.
Our results raise the question of why the processing and classification of affective sounds triggered stronger physiological responses than emotional prosody (cf. Bradley and Lang, 2000), especially since emotional expressions are presumed to possess high biological relevance (Okon-Singer et al., 2013). The variation in affective processing of sounds and prosodic utterances might be explained by overall differences between the two stimulus domains. For visual emotional stimuli, Bayer and Schacht (2014) described two levels of fundamental differences between domains that render a direct comparison almost impossible, namely physical and emotion-specific features. Similar aspects can also be applied to the stimuli used in the present study. Firstly, at the physical level, emotional sounds are more variable in their acoustic content than the spoken utterances. The sounds were hence more diverse, while prosodic emotional expressions vary in only a few acoustic parameters (Hammerschmidt and Jürgens, 2007; see also Jürgens et al., 2011, 2015). Secondly, there are strong differences regarding their emotion-specific features. While pictures and sounds have a rather direct emotional meaning, an emotional expression primarily depicts the expresser's emotional appraisal of a given situation, rather than the situation itself. Emotional expressions thus possess rather indirect meaning (cf. Walla and Panksepp, 2013). Additionally, our prosodic utterances consisted of semantically neutral sentences. There is evidence that although emotional prosody can be recognized irrespective of the actual semantic information of the utterance (Jürgens et al., 2013), semantics seem to outweigh emotional prosodic information when the two are presented simultaneously (Wambacq and Jerger, 2004; Kotz and Paulmann, 2007). Vocal expressions in daily life are rarely produced without the appropriate linguistic content. Regenbogen et al. (2012), for example, demonstrated that empathic concern is reduced when speech content is neutralized. Prosody is an important channel of emotion communication, but semantics and context might be even more important than the expression alone. Findings might thus be different if the prosodic information and the wording had been fully consistent. So far, it seems that attending to emotional stimuli such as pictures or sounds evokes emotional responses in the perceiver, while attending to emotion expressions in faces or voices elicits recognition effort rather than autonomic responses (see Britton et al., 2006, for a similar conclusion).
In our study, we aimed at improving the social relevance of speech tokens by embedding them into context and by providing biographical information about the fictitious speakers in order to increase the affective reactions of participants toward these stimuli. The lack of an effect in our study might indicate that biographical similarity has no effect on emotion processing. It might also be the case that our manipulation was not effective and that similarity unfolds its beneficial effect only in more realistic settings, in which an actual link between both interaction partners can develop (see Burger et al., 2004; Cwir et al., 2011; Walton et al., 2012). Future research is needed to investigate whether social relevance in more realistic situations, such as avatars looking directly at the participants while speech tokens are presented, or utterances spoken by individually familiar people, would increase physiological responsiveness to emotional prosody.
Together, we show that autonomic responses toward emotional prosodic utterances are rather weak, while affective sounds robustly elicit arousal in the listener. Furthermore, our study adds to the existing evidence that pupil size and SCRs reflect functionally different emotion-related ANS activity.
AUTHOR CONTRIBUTIONS
RJ, JF, and AS designed the study and wrote the manuscript. RJ conducted the experiments. RJ and AS conducted the data analysis. | 2018-02-28T14:06:14.973Z | 2018-02-28T00:00:00.000 | {
"year": 2018,
"sha1": "ab97ee3f46bf66606c461df0fa1d538756b8071e",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyg.2018.00228/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ab97ee3f46bf66606c461df0fa1d538756b8071e",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
79495752 | pes2o/s2orc | v3-fos-license | Biopsychosocial and Economic Determinants of Personal Hygiene in the Prevention of Diarrheal Diseases in Sragen District, Central Java
Background: Poor environmental sanitation and personal hygiene have been shown to be associated with increased risk of diarrheal disease. Poor personal hygiene that is associated with an increased risk of diarrheal disease may be explained by the constructs of the Health Belief Model, such as perceived susceptibility and perceived seriousness. This study aimed to examine biopsychosocial and economic determinants of personal hygiene in the prevention of diarrheal diseases. Subjects and Method: This was an analytic observational study with case control design. This study was conducted at Mondokan, Gesi, and Sambungmacan Health Centers, Sragen District, Central Java, from January to March, 2017. A sample of 150 subjects, consisting of 50 cases of diarrheal disease during the past month and 100 subjects without diarrheal disease, was selected in this study by purposive sampling. The dependent variable was prevention behavior of diarrheal disease. The independent variables included perceived susceptibility, seriousness, threat, benefit, barrier, cues to action, and self-efficacy. The data were collected using a pre-tested questionnaire, and analyzed by path analysis model. Results: There were direct and statistically significant effects of perceived seriousness (b= 0.26; SE= 0.06; p < 0.001), threat (b= 0.29; SE= 0.06; p < 0.001), benefit (b= 0.21; SE= 0.06; p < 0.001), barrier (b= -0.12; SE= 0.08; p= 0.032), cues to action (b= 0.17; SE= 0.07; p= 0.003), and self-efficacy (b= 0.28; SE= 0.14; p < 0.001) on prevention behavior of diarrheal disease. There were positive, indirect, and statistically significant effects of perceived susceptibility (b= 0.55; SE= 0.06; p < 0.001), seriousness (b= 0.34; SE= 0.06; p < 0.001), and benefit (b= 0.12; SE= 0.07; p= 0.025) on prevention behavior of diarrheal disease, via perceived threat. Conclusion: Perceived seriousness, threat, benefit, barrier, cues to action, and self-efficacy are direct determinants of prevention behavior of diarrheal disease. Perceived susceptibility, seriousness, and benefit are indirect determinants of prevention behavior of diarrheal disease. Keywords: biopsychosocial and economic determinants, personal hygiene, Health Belief Model. Correspondence: Hervindita Dinda Siswandwika. Masters Program in Public Health, Sebelas Maret University. Email: vindy_7@yahoo.com. Mobile: +6282136242777. Journal of Health Promotion and Behavior (2017), 2(1): 1-14, https://doi.org/10.26911/thejhpb.2017.02.01.01
The population living under poor sanitary conditions reaches 72.5 million, in cities (18.20%) and villages (40.00%). Among the members of ASEAN and SEAR, Indonesia occupies the bottom four in terms of sufficient sanitation facilities. Even in the provinces with good performance (Central Java and Yogyakarta Special Region), one of three households does not have access to clean water (UNICEF Indonesia, 2012). A crucial problem in the domain of sanitation and hygiene has been the behavior of open defecation (BABS, buang air besar sembarang). Households that do not have defecation facilities amount to 17.78% (Kemenkes RI, 2013).
Deaths due to waterborne disease reach 3.4 million people per year. Of all deaths due to poor water quality and sanitation, diarrhea is the biggest cause, with 1.4 million deaths per year. Diarrhea is also the first cause of death among babies (31.40%) and toddlers (25.20%) and the fourth cause of death across all age groups (13.20%) (WSP, 2008; WHO and UNICEF, 2014; Anup, 2012).
The incidence of diarrhea in households that use open wells is 34.00% higher than in households that use piped water. Likewise, the incidence of diarrhea is 66.00% higher among families that defecate in open-air areas than among those that use a family closet and septic tank (UNICEF Indonesia, 2012). In 2015, there were 18 diarrhea outbreaks (extraordinary cases) across 18 provinces and 18 regencies/cities; the number of diarrhea patients in these cases was 1,213 people, with 30 deaths. The diarrhea case fatality rate (CFR) during these extraordinary cases rose sharply to approximately 2.47% (Kemenkes RI, 2016). The cause of the high figures for environment-based contagious diseases has been poor hygienic behavior and poor quality of communal life (Dreibelbis et al., 2003). According to ISSDP (2015), 47.50% of consumed water contains E. coli and 47.00% of community members still defecate in open-air areas.
Indonesia has the second-highest prevalence of open defecation (12.90%) after India (58.00%) (WHO, 2014). In villages of the Province of Central Java, the share of households that have sufficient sanitation (a healthy closet) decreased from 77.00% in 2014 to 67.20% in 2015 (Dinkes Jateng, 2016). In 2015, there were 7,596 cases of diarrhea in the Regency of Sragen. The use of a closet as a defecation facility is still low. The highest prevalence of open defecation has been found in Mondokan (5,164 people, 42.00%), Sambungmacan (2,070 people, 15.00%) and Gesi (978 people, 15.00%) (Dinkes Sragen, 2016).
The theory of the Health Belief Model (HBM), developed by Rosenstock (1966), explains and predicts the possibility of associating behavioral changes with the pattern of certain beliefs or feelings (Hayden, 2010; Nelas et al., 2015). Previous studies by Dahal et al. (2014) and Schmidlin et al. (2014) stated that the knowledge, practice, economy, social culture and beliefs of an individual are related to hygienic behaviors. Therefore, this study aimed at explaining the influence of biopsychosocial and economic determinants of individual hygiene on diarrhea-prevention behaviors by implementing the theory of the Health Belief Model (HBM).
SUBJECTS AND METHOD
This quantitative study made use of an observational analytic design with a case-control framework. The study was conducted in the Regency of Sragen, the Province of Central Java, from January until March 2017. The population in this study was the people of the Regency of Sragen. The sample was gathered through purposive sampling and fixed disease sampling. The total sample was 150 subjects, divided into the case group, namely 50 diarrhea patients within the last one month, and the control group, namely 100 subjects without diarrheal disease, gathered from the working regions of Mondokan Community Health Center, Sambungmacan Community Health Center and Gesi Community Health Center in the Regency of Sragen. The instrument that the researchers applied in measuring the variables of perceived vulnerability, perceived severity/seriousness, perceived threats, perceived benefits, perceived barriers, cues to action and self-efficacy was the Health Belief Model (HBM) questionnaire. The measurement scale was continuous; for the sake of analysis and description, the continuous data were converted into categorical data according to whether the score was low (< mean) or high (≥ mean). Perceived vulnerability was the subjective perception of the risk of being affected by a disease, which referred to the risk of an individual suffering from a certain disease. The greater the risk an individual perceived, the greater the possibility of being involved in risk-decreasing actions. Perception of seriousness was the belief regarding the level of disease seriousness or severity (including evaluation of the medical, clinical and social consequences that might appear), namely that an individual might experience difficulties due to the disease and that the disease might bring negative impacts to his or her life in general.
Perceived threats were the encouragement to perform prevention and treatment of a disease due to the perceived vulnerability and severity/seriousness. An excessive threat would cause fear that inhibits the display of healthy behaviors, because the individual feels helpless in combating his or her disease. Furthermore, perceived benefits were the perception of the value or usefulness of a new behavior in decreasing the risk of being affected by disease, both physically and mentally. An individual would be inclined to adopt healthy behaviors due to his or her belief that these behaviors are beneficial to health. Next, perceived barriers were the negative consequences that occur when an individual takes a new action, whether physically, psychologically or financially. In relation to the behaviors to be adopted, an individual should believe that the benefits he or she retrieves are greater than the consequences of continuing his or her old behaviors.
Cues to action were the factors that encourage an individual to adopt disease-preventing behaviors; they might be external or internal factors, such as mass media, suggestions, and personal or familial experiences regarding healthy behaviors. Self-efficacy referred to the belief in one's own capacity to perform healthy behaviors. If an individual believed in the usefulness of new behaviors but thought that he or she was inhibited from performing them, then these new behaviors would not be performed. Finally, diarrhea-preventing behaviors referred to the healthy behaviors that an individual performs in order to prevent being affected by diarrhea, regarding healthy closet use, the availability and use of clean water facilities, and hand-washing habits.
Previously, the researchers had conducted a face validity test and a content validity test using the Pearson product-moment correlation technique. Then, the researchers performed a reliability test using the Cronbach's Alpha technique. The validity and reliability tests were conducted with 20 community members who shared similar characteristics but came from different locations.
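For illustration, the reliability step can be reproduced in code. The following is a minimal sketch using the Python package pingouin; the item-level responses are hypothetical stand-ins for the 20 pilot respondents, not the study's data.

```python
import pandas as pd
import pingouin as pg

# Hypothetical item responses for one HBM scale (rows = 20 pilot
# respondents, columns = questionnaire items on a 1-5 Likert scale).
items = pd.DataFrame({
    "item1": [4, 3, 5, 4, 2, 5, 4, 3, 4, 5, 3, 4, 4, 5, 2, 3, 4, 4, 5, 3],
    "item2": [4, 3, 4, 4, 2, 5, 3, 3, 4, 4, 3, 4, 5, 5, 2, 3, 4, 3, 5, 3],
    "item3": [5, 2, 5, 4, 3, 4, 4, 2, 4, 5, 3, 3, 4, 5, 1, 3, 4, 4, 4, 3],
})

# Item validity: Pearson item-total correlations.
print(items.corrwith(items.sum(axis=1)))

# Reliability: Cronbach's alpha with its 95% confidence interval.
alpha, ci = pg.cronbach_alpha(data=items)
print(f"Cronbach's alpha = {alpha:.3f}, 95% CI = {ci}")
```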
The secondary data were taken from medical records and patient visit books in the community health centers. The primary data were obtained from direct observation of the subjects' settlements and from the questionnaire. The researchers then performed a multivariate analysis by path analysis with IBM SPSS AMOS 22 software in order to test the relationships between the exogenous variables (perceived vulnerability, perceived severity/seriousness, perceived benefits, perceived barriers, cues to action and self-efficacy) and the endogenous variables (perceived threats and individual hygiene regarding diarrhea-preventing behaviors).
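As a code-level illustration of the same kind of analysis, the sketch below specifies a path model with the Python SEM package semopy instead of AMOS. The variable names and the synthetic data are hypothetical placeholders; only the model structure mirrors the study design (exogenous HBM constructs, perceived threat as mediator, prevention behavior as outcome).

```python
import numpy as np
import pandas as pd
import semopy

# Synthetic stand-in data: one column per HBM construct score. The
# effect sizes used to generate it are illustrative only.
rng = np.random.default_rng(0)
n = 150
df = pd.DataFrame({name: rng.normal(size=n) for name in
                   ["susceptibility", "seriousness", "benefit",
                    "barrier", "cues", "efficacy"]})
df["threat"] = (0.5 * df["susceptibility"] + 0.3 * df["seriousness"]
                + rng.normal(scale=0.5, size=n))
df["behavior"] = (0.3 * df["threat"] + 0.2 * df["benefit"]
                  - 0.1 * df["barrier"] + 0.2 * df["cues"]
                  + 0.3 * df["efficacy"] + rng.normal(scale=0.5, size=n))

# Path model: exogenous constructs act on prevention behavior directly
# and indirectly via perceived threat.
desc = """
threat ~ susceptibility + seriousness + benefit
behavior ~ seriousness + threat + benefit + barrier + cues + efficacy
"""
model = semopy.Model(desc)
model.fit(df)
print(model.inspect())           # path coefficients b with SE and p-values
print(semopy.calc_stats(model))  # fit indices: chi-square (CMIN), df, RMSEA, ...
```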
RESULTS
In this section, the researchers discuss the characteristics of the subjects and the results of the path analysis. Table 1 shows that of the 150 subjects, 96 are female. The age of most subjects, in both the case group and the control group, ranges from 18 to 40 years. Most of the subjects are married. In terms of educational characteristics, most subjects in the case group have an educational background below senior high school (62.00%), while most subjects in the control group have senior high school education and above (71.00%).
According to the results of the study, 24.00% of the subjects in the case group work as farm laborers, and 21.00% of the subjects in the control group work as housewives and entrepreneurs. 92.00% of the subjects in the case group earn an income below the regional minimum wage of IDR 1,422,585. The distribution of subject characteristics based on household size shows that in the case group most households consist of 4 to 5 members (28.00%), while in the control group most households consist of 5 members (36.00%).
The description of the availability of sufficient sanitation is as follows: sufficient sanitation was unavailable for 54.00% of the case group, while it was available for 87.00% of the control group. Path analysis is conducted in order to identify the size of the influence of a variable, both direct and indirect. The size of the influence of an independent variable on the dependent variable is referred to as the path coefficient. The path coefficient itself does not have any unit; the greater the path coefficient, the greater the influence of the variable. The relationships between the dependent and independent variables are established through the mediator variable and analyzed by means of the path analysis model.
This study consists of six exogenous variables, namely perceived vulnerability, disease severity, benefit, barriers, cues to action, and self-efficacy. The intervening variable, which both influences and is influenced by other variables, is perceived threats. The endogenous variable in this study is individual hygiene regarding Diarrhea-preventing behaviors. There are also several observed variables, namely perceived vulnerability, severity, threat, barriers, cues to action, self-efficacy and individual hygiene regarding Diarrhea prevention. The data are processed by IBM SPSS AMOS 22 and the results are displayed in Picture 1. The degrees of freedom (df) were 4, which implies that the model is over-identified and the path analysis may be conducted. Picture 1 shows the structural model after estimation has been conducted. Table 2 shows the indicators of suitability between the path analysis model and the goodness-of-fit measures; from these indicators the researchers obtained a CMIN fit index of 4.604 (p ≥ 0.05) and an RMSEA ≤ 0.05. These results imply that this empirical model has met the stipulated criteria and is declared in accordance with the empirical data.
According to the results of the path analysis in Table 2, the researchers find the diarrhea-preventing behaviors to be influenced by perceived severity/seriousness, perceived threats, perceived benefits, perceived barriers, cues to action, and self-efficacy.
1. There has been an indirect positive influence of perceived vulnerability on Diarrhea-preventing behaviors through perceived threats (b= 0.55; p < 0.001); this relationship is significant.
2. There has been a direct positive influence (b= 0.26; p < 0.001) and an indirect one (b= 0.34; p < 0.001) of perceived severity/seriousness on Diarrhea-preventing behaviors through perceived threats; this relationship is significant.
3. There has been a direct positive influence of perceived threats on Diarrhea-preventing behaviors (b= 0.29; p < 0.001); this relationship is significant.
4. There has been a direct positive influence (b= 0.21; p < 0.001) and an indirect one (b= 0.12; p= 0.025) of perceived benefits on Diarrhea-preventing behaviors through perceived threats; this relationship is significant.
5. There has been a direct negative influence of perceived barriers on Diarrhea-preventing behaviors (b= -0.12; p= 0.032); this relationship is significant.
6. There has been a direct positive influence of cues to action on Diarrhea-preventing behaviors (b= 0.17; p= 0.003); this relationship is significant.
7. There has been a direct positive influence of self-efficacy on Diarrhea-preventing behaviors (b= 0.28; p < 0.001); this relationship is significant.
Picture 1. Structural model of path analysis with estimates
The influence of perceived vulnerability toward Diarrhea-preventing behaviors
The results of the study show that there has been an indirect influence of perceived vulnerability on Diarrhea-preventing behaviors through perceived threats (b= 0.55; p < 0.001). This implies that an individual who perceives that his or her body is vulnerable to Diarrhea will have a greater possibility of adopting Diarrhea-preventing behaviors than an individual who perceives that his or her body is not vulnerable to Diarrhea.
An individual who refuses to adopt healthy behaviors has a smaller possibility of believing that individual hygiene behaviors are necessary to protect family health than an individual who adopts Diarrhea-preventing behaviors. If an individual perceives that he or she is at risk of getting infected by a disease, then this individual will perform safe behaviors and disease-preventing efforts.
According to Rosenstock (1982) in Orji et al. (2012), people who perceive that they easily get affected by a disease will more easily feel threatened. This threat will encourage an individual to perform disease-preventing or disease-treating behaviors. In this study, the researchers still found respondents who feel that they are not vulnerable to Diarrhea (34.70%). If an individual perceives that he or she is not vulnerable to a disease, then he or she should be provided with more intensive stimuli so that this individual will display the responses necessary for his or her health. This sense of invulnerability might be caused by minimal knowledge regarding the danger of the disease itself (Vega, 2013). There should be efforts to improve this knowledge through both individual and communal health education.
The influence of perceived disease severity toward Diarrhea-preventing behaviors
The results of the study show that there has been a direct (b= 0.26; p < 0.001) and an indirect (b= 0.34; p < 0.001) influence of perceived disease severity on Diarrhea-preventing behaviors through perceived threats. This relationship implies that an individual who strongly perceives that Diarrhea is a serious disease will have a greater possibility of adopting Diarrhea-preventing behaviors than an individual who perceives that Diarrhea is not a serious disease and does not threaten his or her health.
The perceived severity/seriousness refers to an individual's feelings regarding the severity of a disease and includes evaluation of the clinical and medical consequences (death, disability and pain) together with the social consequences that might appear (impact on occupation, family life and social relationships). Before an individual adopts new behaviors, he or she should first understand the meaning or the benefit of these new behaviors for himself or herself and for his or her family (Hayden, 2010; Sigler, 2015). Behaviors that are based on knowledge will last longer than those that are not. Furthermore, knowledge itself will create a mental response in the form of an attitude toward the object that has been known. An attitude attained through experience creates a direct influence on subsequent healthy behaviors (Vega, 2013; Sigler, 2015).
The data in this study show that there are some community members who perceive that Diarrhea is not a serious disease and does not threaten their health (42.70%). The reason is that individual hygiene has not been their main option among Diarrhea-preventing behaviors, due to the minimal knowledge these members have. As an effort to improve the perceived severity/seriousness, further information regarding the danger of Diarrhea should be provided. If the perceived severity/seriousness improves, then the preventing behaviors will improve as well (Jasper and Bartram, 2012).
The influence of perceived threats toward Diarrhea-preventing behaviors
The results of this study show that there has been direct influence from perceived threats toward Diarrhea-preventing behaviors (b= 0.29; p < 0.001). This implies that an individual who perceives that Diarrhea is a disease that threatens his or her health will have greater possibility to adopt the preventing behaviors than the one who perceives that Diarrhea is not a threatening disease.
According to Rosenstock (1982) in Burke (2013), an individual's view regarding the severity of a disease (perceived seriousness), namely the risk and the difficulty that might be experienced due to suffering from the disease, will encourage the individual to feel easily or vulnerably affected by the disease. On the other hand, an individual's perception regarding the possibility of getting affected by Diarrhea (perceived susceptibility) encourages him or her to feel easily threatened (perceived threats). The results of this study are in accordance with a study by Schmidlin et al. (2013), namely that perceived vulnerability and perceived severity/seriousness cause a higher perceived threat. This threat encourages an individual to adopt disease-preventing or treating actions.
According to the theory of HBM, healthy behaviors might appear and be maintained due to a commitment to perform healthy behaviors and the presence of fear toward the threats of a disease. Individual commitment is influenced by behavior-specific cognitions and affect, which include perceived benefits, perceived barriers, self-efficacy, interpersonal influence and the perceived threat of disease (Fauziah et al., 2015).
In this study, the researchers still found individuals who consider diarrhea a disease that does not threaten their health (34.70%). The reason is that the understanding of the threats a disease poses differs between individuals, depending on their medical knowledge of the disease. It would be better if individuals were provided with health education in order to improve community members' knowledge regarding individual hygiene, so that perceived threats might be improved and motivate community members to pursue healthy behaviors.
The influence of perceived benefits toward Diarrhea-preventing behaviors
The results of this study show that there has been a direct (b= 0.21; p < 0.001) and an indirect (b= 0.12; p= 0.025) influence of perceived benefits on Diarrhea-preventing behaviors through perceived threats. This implies that an individual who perceives that individual hygiene is useful will have a greater opportunity to adopt Diarrhea-preventing behaviors than one who does not perceive individual hygiene as useful.
Perceived benefits are an individual's belief in taking disease-preventing, disease-protecting and disease-treating actions in order to decrease his or her vulnerability to a disease or its severity, as well as an individual's confidence in the effectiveness of those actions in decreasing the risks caused by the disease (Smith et al., 2011; William et al., 2015).
The results of this study show that an individual will perform Diarrhea-preventing actions if he or she feels that the actions are useful, and vice versa. The researchers still found that 24.70% of community members in this study do not adopt individual hygiene because they do not perceive the usefulness of healthy behaviors. Healthy living is an increasing need and demand, although in reality the health status of Indonesian people has not met expectations (Priyoto, 2014).
Individual hygiene does not only prevent certain diseases within the family but also has a wider positive impact by preventing disease outbreaks among other people. Therefore, parents' knowledge and attitudes are very important for understanding the benefits of individual hygiene in Diarrhea prevention and for teaching clean and healthy behaviors to their children as early as possible (Brown et al., 2013).
The influence of perceived barriers toward Diarrhea-preventing behaviors
The results of this study show that there has been a direct influence of perceived barriers on Diarrhea-preventing behaviors (b= -0.12; p= 0.032). This implies that an individual who perceives many barriers while adopting hygienic behaviors will have a smaller possibility of adopting Diarrhea-preventing behaviors than one who does not perceive any barriers while performing his or her disease-preventing behaviors.
Perceived barriers are the negative aspects that potentially inhibit health efforts (side effects, uncertainty), or the barriers perceived to affect the adoption of recommended new behaviors (anxiety, incompatibility, unhappiness and nervousness) (Taylor, 2007; Romano, 2014).
The results of this study are in accordance with the study by Smith et al., which states that an individual who does not perform individual hygiene is more likely to consider adopting these behaviors costly. Barriers to performing disease-preventing behaviors include cost, culture and difficulty in providing facilities (sufficient facilities and clean water are not available) (Asamani, 2011; Freeman et al., 2014). Awareness of the barriers that might appear needs to be anticipated and accounted for within an individual's healthy behaviors, both as prevention and as preliminary handling of his or her health problems. In healthy behavior, the barriers that occur might be imaginary or real.
The barriers in this study are high costs, unavailable sanitation facilities and norms/cultures. Perceived barriers are a significant element in determining whether any change occurs. In relation to new behaviors to be adopted, an individual should believe that the benefits of these new behaviors are greater than the consequences of continuing the old behaviors. Zetu et al. (2013) stated that there is a relationship between perceived barriers and disease-preventing behaviors, in which the problem of cost becomes a barrier to pursuing healthy behaviors. Last but not least, the results of this study are also in accordance with the theory of HBM, which explains that perceived barriers might act as an obstacle to performing the recommended behaviors (Romano, 2014).
The descriptive results on perceived barriers show that 26.00% of subjects perceive barriers to turning their negative behaviors into positive ones. For example, one of these subjects had been taught by his parents to defecate in the river, whereas now he should adopt a new healthy behavior by defecating in a water closet. In order to change this behavior, he should believe that the barriers and consequences of individual hygiene behaviors are smaller than those of continuing the old behaviors. To support this change, an understanding of the differences between old and new behaviors should be disseminated, along with the impacts that might occur due to a disease outbreak in their settlement.
The influence of cues to action toward Diarrhea-preventing behaviors
The results of the study show that there has been a direct influence of cues to action on Diarrhea-preventing behaviors (b= 0.17; p= 0.003). This implies that an individual who receives cues to action from medical staff, health cadres, relatives and neighbors regarding the importance of individual hygiene for his or her health will have a greater possibility of adopting Diarrhea-preventing behaviors than one who does not receive cues to action.
Cues to action are the action-triggering factors that might come from the individual alone (the appearance of symptoms of certain diseases) or from external aspects (others' suggestions, health campaigns, or seeing colleagues or family members affected by a similar disease). Cues to action are a factor that accelerates an individual to take real action for the sake of his or her health (Clasen et al., 2007; Asamani, 2011; Bakhtari, 2012).
Cues to action involve the illness of a family member, media reports (Dreibelbis et al.), mass media campaigns, others' suggestions and medical staff's suggestions (Sigler et al., 2015). The presence of cues, education, symptoms or information media (cues to action) might inform an individual of the danger of a disease; as a result, he or she will take action. Most of the stimuli external to an individual come as perceived objects. Perceived objects are categorized into two parts, namely non-human objects and human objects. If the perceived object is human, then the perceived individual will influence the perceiving individual (Priyoto, 2014).
Within the theory of HBM, in order to decrease the sense of being threatened, medical staff should offer an alternative action (Rosenstock, 1982; Burke, 2013). Whether the individual approves the proposed alternative or not depends on the individual's view regarding the benefits and barriers of implementing the alternative. The individual will consider whether the alternative might decrease the threat of getting affected by a disease along with its negative impact. On the contrary, the negative consequences of the proposed alternative action (problems of cost, shame, fear of pain and the like) often cause the individual to avoid implementing the recommended alternative (Nelas et al., 2015).
In this study, the researchers still found community members who have not received cues to action (26.70%); as a result, they have not understood the danger of Diarrhea and the importance of individual hygiene behaviors. The reason behind this finding is the low access of medical staff to this region and the different levels of individual socialization in each region, depending on their culture regarding illness and disease. It would be better if counselling were held every month in remote areas, especially on the riverbanks of the Bengawan Solo, regarding individual hygiene behaviors in relation to defecating in a water closet, so that the community members' paradigm might change and the diseases that might be sourced from unhealthy water and behaviors might be controlled.
The influence of self-efficacy toward Diarrhea-preventing behaviors
The results of this study show that there has been a direct influence of self-efficacy on Diarrhea-preventing behaviors (b= 0.28; p < 0.001). This implies that an individual who has strong self-efficacy (self-capacity) in performing individual hygiene behaviors will have a greater possibility of adopting Diarrhea-preventing behaviors than one who has weak self-efficacy.
Strong self-efficacy makes an individual put aside barriers and strive to perform his or her role optimally. Family support is one of the factors that influence an individual's behaviors in taking the right decisions. The presence of family support might encourage behavioral capacity and willingness (Freeman et al., 2014). High self-efficacy might cause an individual to endure longer in more difficult problems, to discard ineffective problem-solving activities, to be quicker in selecting strategies, to review any mistakes in his or her work, to prepare for more challenging objectives, and to spend less time being anxious about the consequences of failure (WSP, 2008). Zetu et al. (2013) suggested that self-efficacy is related to the belief that an individual has the capacity to perform the expected positive actions.
Behaviors are determined by motive and confidence, regardless of whether the motive or the confidence is in accordance with reality or with others' views of what is best for the individual. This opinion or confidence might be in accordance with reality, but it might also differ from reality as seen by other people. Although it might differ, according to Rosenstock (1982) it is this subjective opinion that becomes the key to performing (or not performing) a healthy action. This implies that an individual will perform treatment actions if he or she is truly threatened by the disease. If he or she is not confident in his or her capacity to perform the behaviors, then this individual might do nothing.
In this study, the researchers found 19.30% of subjects who are still not confident in their self-efficacy to perform preventing behaviors. This lack of confidence in their self-efficacy to provide sufficient facilities keeps them from performing the recommended behaviors (Weaver et al., 2016). It would be better if the government optimized aid for sufficient sanitation facilities for community members who have cost problems, so that each member has sufficient sanitation to support the change of their behaviors. In addition, health education regarding the use of a clean water closet should be improved in order to change the paradigm of villagers regarding the dangers of continuing old habits, so that the number of Diarrhea cases might decrease. Confidence in self-efficacy determines how an individual behaves. An individual will not try to do something unless he or she thinks that he or she can do it. | 2019-03-17T13:08:20.061Z | 2017-05-18T00:00:00.000 | {
"year": 2017,
"sha1": "2893a2a93f52fcf5fae187ac16edc6554c92b4f6",
"oa_license": "CCBYNCSA",
"oa_url": "http://www.thejhpb.com/index.php?journal=thejhpb&op=download&page=article&path[]=33&path[]=36",
"oa_status": "GOLD",
"pdf_src": "Neliti",
"pdf_hash": "a95fd7f624f5f6af808e3f0dfdd683c814e29420",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
242757446 | pes2o/s2orc | v3-fos-license | A further generalisation of sums of higher derivatives of the Riemann Zeta Function
We prove an asymptotic for the sum of $\zeta^{(n)} (\rho)X^{\rho}$ where $\zeta^{(n)} (s)$ denotes the $n$th derivative of the Riemann zeta function, $X$ is a positive real and $\rho$ denotes a non-trivial zero of the Riemann zeta function. The sum is over the zeros with imaginary parts up to a height $T$, as $T \rightarrow \infty$. We also specify what the asymptotic formula becomes when $X$ is a positive integer, highlighting the differences in the asymptotic expansions as $X$ changes its arithmetic nature.
Introduction
Let ζ(s) denote the Riemann zeta function and let ρ = β + iγ be a non-trivial zero of ζ(s). The starting point for the motivation of this paper is Landau's theorem [14], which states that for any X > 1 we have
$$\sum_{0<\gamma\le T} X^{\rho} = -\frac{T}{2\pi}\Lambda(X) + O(\log T) \qquad (1.1)$$
as T → ∞. Here and throughout this article, Λ(X) denotes the von Mangoldt function given by
$$\Lambda(X) = \begin{cases}\log p & \text{if } X = p^{k} \text{ for some prime } p \text{ and some integer } k \ge 1,\\ 0 & \text{otherwise.}\end{cases} \qquad (1.2)$$
Formula (1.1) is proven by estimating the integral $\frac{1}{2\pi i}\oint_{R} \frac{\zeta'}{\zeta}(s)\,X^{s}\,ds$ for a suitably chosen rectangle R enclosing those zeros ρ for which 0 < γ ≤ T.
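As an aside, Landau's formula (1.1) is easy to probe numerically. The sketch below uses mpmath's built-in table of non-trivial zeros; the height T = 200 and the test value X = 4 (so that Λ(X) = log 2) are arbitrary illustrative choices.

```python
from mpmath import mp, zetazero, log, pi, mpc

mp.dps = 15

def landau_sum(X, T):
    """Sum X^rho over non-trivial zeros rho with 0 < Im(rho) <= T."""
    total = mpc(0)
    n = 1
    while True:
        rho = zetazero(n)        # n-th zero 1/2 + i*gamma_n
        if rho.imag > T:
            break
        total += mp.power(X, rho)
        n += 1
    return total

T, X = 200, 4.0                  # X = 2^2, so Lambda(X) = log 2
lhs = landau_sum(X, T)
rhs = -T / (2 * pi) * log(2)     # leading term -(T/2pi) * Lambda(X)
print(lhs.real, float(rhs))      # agree up to the O(log T) error
```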
Gonek [8] has generalised Landau's formula to be uniform in both variables. He showed that for X, T > 1,
$$\sum_{0<\gamma\le T} X^{\rho} = -\frac{T}{2\pi}\Lambda(X) + O\big(X\log(2XT)\log\log(3X)\big) + O\Big(\log X\,\min\big(T, \tfrac{X}{\langle X\rangle}\big)\Big) + O\Big(\log(2T)\,\min\big(T, \tfrac{1}{\log X}\big)\Big), \qquad (1.3)$$
where ⟨X⟩ denotes the distance from X to the nearest prime power other than X itself. Note then that (1.1) follows from (1.3) provided that X is fixed and T → ∞. The large number of error terms is explained in Gonek [8]; they are all due to the different behaviour of the sum in different ranges of X. He then also notes that when X = n ∈ ℕ and T ≫ n, the last two error terms of (1.3) are absorbed by the first error term, and so (1.3) becomes
$$\sum_{0<\gamma\le T} n^{\rho} = -\frac{T}{2\pi}\Lambda(n) + O\big(n\log(2nT)\log\log(3n)\big). \qquad (1.4)$$
The differences between formula (1.3) and formula (1.4) highlight the differences in behaviour of sums of this form when X is no longer an arbitrary positive real but is instead a positive integer, a theme that will recur several times in this paper. Fujii has generalised Gonek's result (assuming RH) in [3] and [4] to find the sub-leading and sub-sub-leading behaviour. This is given by an expansion whose correction terms involve $\frac{X^{1/2+iT}}{2\pi i\log X}$ and $\frac{1}{\pi}\arg\zeta\big(\tfrac{1}{2}+iT\big)$, together with an error term of order $O\big(\tfrac{\log T}{(\log\log T)^{2}}\big)$
(1.5). Fujii has also shown in [5] an asymptotic formula (1.6) for $\sum_{0<\gamma\le T}\zeta'(\rho)$, where E(T) is the error term, given explicitly both unconditionally and assuming RH, and where C₀ and C₁ are the constants in the expansion
$$\zeta(s) = \frac{1}{s-1} + C_0 + C_1(s-1) + \cdots$$
about s = 1. Fujii forms a hybrid of the sums (1.1) and (1.6) in his paper [6]: he gives an explicit formula (1.7) for $\sum_{0<\gamma\le T}\zeta'(\rho)X^{\rho}$ for a fixed positive real number X, where ∆(X) is defined by (1.8) throughout this article. Fujii then narrows X down to a positive integer and gives an asymptotic formula (1.9) for this case, again exhibiting the behaviour that Gonek noted in [8], namely that the arithmetic nature of X can affect the asymptotic expansion. Specifically, he finds an expansion for integer X ≥ 1, where C₀, C₁ are given above and C is a positive constant. Clearly setting X = 1 then gives (1.6).
A generalisation of Fujii's result has been given by Jakhlouti and Mazhouda [12], who give an analogue of (1.7) for Dirichlet L-functions. This extension is taken further to an asymptotic formula at the L-function's a-points, for any fixed complex number a. The analogue of (1.7) is their Lemma 3 with a = 0, valid as T → ∞ for a fixed positive number X, where χ is a primitive character mod q. Fixing q = 1 gives (1.7), and setting q = 1 and X = 1 gives (1.6).
Other ideas have grown from (1.1). For example, Ford and Zaharescu [2] start with Landau's theorem (1.1) and investigate the distribution of the fractional parts of αγ, where α is a fixed non-zero real number. This idea is then expanded upon by Ford, Soundararajan and Zaharescu in [1]. Further examples of results that start with (1.1) are found in papers [9] and [13].
Statement of the Results
Let X be a fixed positive number. Let ζ^(n)(s) denote the nth derivative of the Riemann zeta function ζ(s) and let ρ = β + iγ be a non-trivial zero of ζ(s). Write s = σ + it with σ, t ∈ ℝ. We suppose that T > T₀ and that T is not the imaginary part of a zero of ζ(s). We further assume that |T − γ| ≫ 1/log T. This restriction has no effect on the final result. Then we have the following result, which is an analogue of Fujii's (1.7). Theorem 1. For X a fixed positive real number, we have an explicit asymptotic expansion for $\sum_{0<\gamma\le T}\zeta^{(n)}(\rho)X^{\rho}$, established in Section 5. Remark. Setting n = 1 in the theorem recovers Fujii's result in (1.7).
When we restrict X to being a positive integer we obtain a special case of the above results. It is evident from the statement of the following corollary how the asymptotic expansions change depending on whether X > 0 is an arbitrary real number or X ≥ 1 is an integer.
where C is a positive constant; if we assume the Riemann Hypothesis, the error term improves accordingly. Further, the C_j are the coefficients in the Laurent expansion of ζ(s) about s = 1, given by
$$\zeta(s) = \frac{1}{s-1} + \sum_{j=0}^{\infty} C_j (s-1)^{j},$$
and the A_j are the coefficients in the Laurent expansion of ζ'(s)/ζ(s) about s = 1, given by
$$\frac{\zeta'}{\zeta}(s) = -\frac{1}{s-1} + \sum_{j=0}^{\infty} A_j (s-1)^{j}.$$
Remark. Note that the A_j are related to the C_j by a recursive formula, as shown in Israilov [11]. Remark. Setting n = 1 in the corollary recovers Fujii's result in (1.9). Setting n = 1 and X = 1 recovers Fujii's result in (1.6). Setting X = 1 recovers our result from [10] for general n, given later in this paper as Theorem 2.
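The coefficients C_j can also be computed numerically: they are related to the Stieltjes constants γ_j by C_j = (−1)^j γ_j / j!, which follows from the standard series defining the Stieltjes constants. A short sketch using mpmath:

```python
from mpmath import mp, stieltjes, factorial, zeta

mp.dps = 20
# C_j = (-1)^j * gamma_j / j!, with gamma_0 the Euler-Mascheroni constant.
C = [(-1)**j * stieltjes(j) / factorial(j) for j in range(4)]
print(C[0], C[1])  # C_0 = 0.57721..., C_1 = 0.07281...

# Cross-check against zeta(s) - 1/(s-1) near s = 1:
s = mp.mpf(1) + mp.mpf('1e-6')
print(zeta(s) - 1 / (s - 1))  # approximately C_0
```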
Outline of the Paper
So far we have described the motivation for studying asymptotic expansions of the sums given in Section 1. We have stated the main results in Section 2 that we will prove in the following sections.
In Section 4 we will recall some basic facts about the Riemann zeta function ζ(s) and the Riemann xi function ξ(s). One of the main results that we will need to state is the Theorem from Hughes and Pearce-Crump [10] that will be essential in proving Corollary 1.
In Section 5 we will use the tools from Section 4 to prove Theorem 1. The integral we use to prove this theorem is given by
$$I = \frac{1}{2\pi i}\oint_{R} \frac{\xi'}{\xi}(s)\,\zeta^{(n)}(s)\,X^{s}\,ds,$$
where ξ(s) is the Riemann xi function, ζ^(n)(s) denotes the nth derivative of ζ(s), and R denotes the rectangular positively oriented contour with vertices c + i, c + iT, 1 − c + iT, 1 − c + i, connected in this order, with c = 1 + 1/log T. The non-trivial zeros of ζ(s) up to a height T are contained within R, and so by Cauchy's Theorem the integral represents the summation $\sum_{0<\gamma\le T}\zeta^{(n)}(\rho)X^{\rho}$. We split this section into several subsections, each corresponding to a different part of the contour we integrate over, to show that most of the contribution comes from the left-hand side of the contour, while the other sides mostly contribute only to the error term.
Finally in Section 6 we will prove Corollary 1 which will highlight the differences between the general case proved in Section 5 for any positive number X and the case when X is a positive integer. As described above we will use a result from [10] to do most of the work here. This will again highlight the observation of Gonek's in [8] that the asymptotic formulae tend to change quite dramatically depending on the arithmetic nature of X.
Preliminary Lemmas
In this section we recall some basic information about ζ(s) and ξ(s), as well as recalling some results from other papers that will be useful in our proof. Any facts that are not explicitly referenced in this section can be found in any good text about the Riemann zeta function, for example they can be found in Titchmarsh [16].
Firstly recall that the functional equation for ζ(s) is given by
$$\zeta(s) = \chi(s)\,\zeta(1-s), \qquad \chi(s) = 2^{s}\pi^{s-1}\sin\Big(\frac{\pi s}{2}\Big)\Gamma(1-s),$$
where Γ(s) denotes the Gamma function throughout this paper. We state a more general functional equation for ζ^(n)(s) that is proved using the functional equation for ζ(s) and the Leibniz product rule.
Lemma 1. The general functional equation for ζ^(n)(s) is given by the following formula:
$$\zeta^{(n)}(s) = \sum_{k=0}^{n}\binom{n}{k}\,\chi^{(n-k)}(s)\,(-1)^{k}\,\zeta^{(k)}(1-s).$$
We now recall that for σ > 1 we may write both ζ^(n)(s) and ζ'/ζ(s) in terms of their Dirichlet series, which are given by
$$\zeta^{(n)}(s) = (-1)^{n}\sum_{m=2}^{\infty}\frac{(\log m)^{n}}{m^{s}} \qquad (4.2)$$
and
$$\frac{\zeta'}{\zeta}(s) = -\sum_{r=2}^{\infty}\frac{\Lambda(r)}{r^{s}}, \qquad (4.3)$$
where Λ(r) is the von Mangoldt function defined in (1.2). We will also need the following result, which we proved in [10, Sect. 5], for the integral along the top and the bottom of our contour.
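A quick numerical check of the Dirichlet series (4.2) against a direct evaluation of ζ^(n)(s); the sample point s = 2.5 + i and the order n = 2 are arbitrary, and the truncation at m = 20000 is accurate to several digits here.

```python
from mpmath import mp, zeta, log, power, mpc

mp.dps = 15
s, n = mpc(2.5, 1.0), 2

# Truncated Dirichlet series (-1)^n * sum_{m >= 2} (log m)^n / m^s:
series = (-1)**n * sum(log(m)**n / power(m, s) for m in range(2, 20001))
direct = zeta(s, derivative=n)
print(series)
print(direct)   # agrees with the truncated series to several digits
```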
For the Riemann xi function we write
$$\xi(s) = \frac{1}{2}\,s(s-1)\,\pi^{-s/2}\,\Gamma\Big(\frac{s}{2}\Big)\,\zeta(s),$$
so the functional equation for ξ(s) is given by ξ(s) = ξ(1 − s). We now observe that
$$\frac{\xi'}{\xi}(s) = \frac{1}{s} + \frac{1}{s-1} - \frac{1}{2}\log\pi + \frac{1}{2}\,\psi\Big(\frac{s}{2}\Big) + \frac{\zeta'}{\zeta}(s),$$
where we have written ψ(s) = Γ'(s)/Γ(s), and for |arg s| < π − δ with arbitrarily fixed positive δ and for |s| ≥ 1/2 we have ψ(s) = log s + O(1/|s|). Combining these two observations, we have the following Lemma.
Lemma 3. With the conditions written above, we have
$$\frac{\xi'}{\xi}(s) = \frac{1}{2}\log\frac{s}{2\pi} + \frac{\zeta'}{\zeta}(s) + O\Big(\frac{1}{|s|}\Big).$$
Finally, to prove Corollary 1 we will need the main result from [10], which we state in full here for ease of reference.
Theorem 2. With the setup as stated in our main results in Section 2 and with the C_j and A_j coefficients defined in Corollary 1, we have an asymptotic expansion for $\sum_{0<\gamma\le T}\zeta^{(n)}(\rho)$, with an error term that is bounded unconditionally and, in sharper form, under RH.
Proof of Theorem 1
Let X be a fixed positive real. We write s = σ + it with σ, t ∈ ℝ and ρ = β + iγ for a non-trivial zero of the Riemann zeta function ζ(s). Suppose T > T₀, that T is not the imaginary part of a zero of ζ(s), and further that |T − γ| ≫ 1/log T, where γ is the imaginary part of any zero ρ. This restriction on T is harmless within our remainder terms.
Set c = 1 + 1/log T and consider the integral
$$I = \frac{1}{2\pi i}\oint_{R} \frac{\xi'}{\xi}(s)\,\zeta^{(n)}(s)\,X^{s}\,ds, \qquad (5.1)$$
where ξ(s) is the Riemann xi function, ζ^(n)(s) denotes the nth derivative of ζ(s), and R denotes the rectangular positively oriented contour with vertices c + i, c + iT, 1 − c + iT, 1 − c + i, connected in this order. By Cauchy's Theorem,
$$I = \sum_{0<\gamma\le T}\zeta^{(n)}(\rho)X^{\rho}. \qquad (5.2)$$
We now need to evaluate I in another way to obtain our asymptotic expansion, decomposing the integral (5.1) along the sides of the contour as I = I_R + I_T + I_L + I_B.
Bounding I_B and I_T.
Notice that by our choice of T we may bound I_B and I_T trivially within our error. To do this, recall that for −1 ≤ σ ≤ 2 and with our general assumptions we have the standard pointwise bound from Gonek [7, Sect. 2, p. 127]; the required estimate for I_B and I_T then follows by Lemma 2.
Evaluating I_R.
Writing s = c + it and using (4.5), we may write I_R as an integral over the right-hand side of the contour. Using the Dirichlet series (4.2), (4.3) and Lemma 3, we may rewrite this as a sum of two main contributions, I_{R,1} and I_{R,2}, plus admissible errors. Consider I_{R,1} first (with ∆(X) defined as in (1.8)). The first part is evaluated directly; next, integrating by parts and summing, we obtain the remaining part together with an admissible error term. If 0 < X < 1 we can do slightly better than this error term. Recombining, we obtain (5.4). Now consider I_{R,2}; proceeding as for I_{R,1} above, we obtain an analogous evaluation, where again the error can be improved slightly for 0 < X < 1. Combining, we have a main term involving $\sum_{X=mr}\Lambda(r)\log^{n} m$ together with an error $O(\log^{n+2} T)$, which is (5.5). Finally, we may combine (5.4) and (5.5) to obtain I_R. We then evaluate I_L, which is where most of the contribution to the asymptotic expansion comes from. Using Lemma 1 we express the integrand via the functional equation; taking complex conjugates and applying Lemma 3 then reduces I_L to a manageable form.
We now split this integral and evaluate each part separately.
The key component of this part of the proof is the method of stationary phase. Applications of this method to these types of problems can be found in Gonek [7, Sect. 4, p. 131], in Levinson [15] and in Jakhlouti and Mazhouda [12, Sect. 2, p. 13], amongst other places.
In an entirely analogous way to the proof given by Gonek in [7], we are able to prove the following result, which we need to evaluate the integrals J₁, J₂, J₃.
Lemma 4. Let X be a fixed positive real. Let {b_m}_{m=1}^∞ be a sequence of complex numbers such that for any ε > 0, b_m ≪ m^ε. Let c > 1 and let k ≥ 0 be an integer. Then for T sufficiently large, we have an asymptotic evaluation, via stationary phase, of the mean value $\frac{1}{2\pi}\int_1^T(\cdot)\,dt$ appearing in J₁, J₂, J₃. Applying Lemma 4 to each of the summations J_k, k = 1, 2, 3, evaluating them in turn, recombining the results and taking complex conjugates, and then combining the resulting asymptotic expansion with our observation that $I = \sum_{0<\gamma\le T}\zeta^{(n)}(\rho)X^{\rho}$ in (5.2), completes the proof of Theorem 1.
Proof of Corollary 1
We may rewrite Theorem 1 in a slightly different way, as Corollary 2 below, where ∆(X) is given in Theorem 1.
Proof. This follows from Theorem 1 by using the binomial expansion of log^n(rX) in the last summation in the braces in Theorem 1.
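The binomial expansion step can be verified symbolically; the following sketch checks log^n(rX) = Σ_{k=0}^{n} C(n,k) log^k r · log^{n−k} X for n = 3.

```python
import sympy as sp

r, X = sp.symbols('r X', positive=True)
n = 3
# log(rX) = log r + log X for positive r, X; raise to the n-th power.
lhs = sp.expand(sp.expand_log(sp.log(r * X)) ** n)
rhs = sum(sp.binomial(n, k) * sp.log(r) ** k * sp.log(X) ** (n - k)
          for k in range(n + 1))
print(sp.simplify(lhs - rhs))   # prints 0
```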
Remark. The advantage of rewriting Theorem 1 in the form of Corollary 2 is that none of the summations involving exponentials has any reliance on powers of log X.
When X ≥ 1 and X ∈ ℤ, the summations involving exponentials simplify. Then, in an entirely analogous way to that done in Hughes and Pearce-Crump [10], we can evaluate the resulting sum S asymptotically, where C > 0 is a constant. Multiplying S by (−1)^{k+1} gives the summation that we were originally looking for. Substituting these expressions into the asymptotic expansion from Corollary 2 gives
$$\sum_{0<\gamma\le T}\zeta^{(n)}(\rho)X^{\rho} = (-1)^{n}\frac{T}{2\pi}\left\{\log^{n} X^{\frac{1}{2}}\left(\log\frac{T}{2\pi} - \frac{1}{2} + \frac{\pi i}{4}\right) - \sum_{mr=X}\Lambda(r)\log^{n} m + \log^{n} X^{\frac{1}{2}}\left(\log X - \frac{\pi i}{4}\right)\right\} + \cdots
$$
"year": 2021,
"sha1": "cfbc23c5cc89e9fd1da07943000931b4d51c6432",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "cfbc23c5cc89e9fd1da07943000931b4d51c6432",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
119153958 | pes2o/s2orc | v3-fos-license | An asymptotic preserving mixed finite element method for wave propagation in pipelines
We consider a parameter dependent family of damped hyperbolic equations with interesting limit behavior: the system approaches steady states exponentially fast, and as the parameter tends to zero the solutions converge to those of a parabolic limit problem. We establish sharp estimates and elaborate their dependence on the model parameters. For the numerical approximation we then consider a mixed finite element method in space together with a Runge-Kutta method in time. Due to the variational and dissipative nature of this approximation, the limit behavior of the infinite-dimensional problem is inherited almost automatically by the discrete problems. The resulting numerical method is thus asymptotic preserving in the parabolic limit and uniformly exponentially stable. These results are further shown to be independent of the discretization parameters. Numerical tests are presented for a simple model problem which illustrate that the derived estimates are sharp in general.
Introduction
Pipeline networks in gas or water supply systems are usually made up of rather long pipes and the time scales of interest are typically large as well. The propagation of pressure waves in such long pipes may then be described by the hyperbolic system
$$\partial_t p^{\epsilon} + \partial_x m^{\epsilon} = 0, \qquad (1)$$
$$\epsilon^2 \partial_t m^{\epsilon} + \partial_x p^{\epsilon} + a\, m^{\epsilon} = 0, \qquad (2)$$
together with appropriate initial and boundary conditions. Here p^ǫ corresponds to the pressure, m^ǫ to the momentum or mass flux, and a is a generalized friction coefficient which encodes information about the pipe diameter and roughness. This system can be derived by a parabolic rescaling t = t̃ǫ², x = x̃ǫ of the physical space and time variables x̃, t̃ from the Euler equations or the shallow water equations under some simplifying assumptions [1,14], and ǫ can be assumed to be small.
Herbert Egger, Technische Universität Darmstadt, Germany, e-mail: egger@mathematik.tu-darmstadt.de
Thomas Kugler, Technische Universität Darmstadt, Germany, e-mail: kugler@mathematik.tu-darmstadt.de
The parameter dependent hyperbolic problem (1)-(2) has an interesting limit behavior for long time t → ∞ and in the parabolic limit ǫ → 0, which has been studied intensively in the literature [1,11,10,12,15,16]. Many interesting results are available even for more general problems, including the isentropic Euler equations with damping and rather general hyperbolic systems [3,13]. In this note, we contribute to this active research field by establishing the following theoretical results:
(R1) For ǫ → 0, the solutions (p^ǫ, m^ǫ) of (1)-(2) converge to the solution (p^0, m^0) of the corresponding parabolic limit problem, with a constant C in the corresponding error estimate that is uniform in ǫ and independent of time t ≥ 0.
(R2) Assume that the boundary values are kept constant. Then for any 0 ≤ ǫ ≤ 1 the solutions (p^ǫ, m^ǫ) converge to the same steady state (p̄, m̄), with constants C and γ > 0 in the corresponding exponential decay estimate that are independent of t ≥ 0 and ǫ.
Our proofs are based on careful energy estimates that explicitly take into account the dependence on the parameter ǫ. As a consequence, the results not only hold for single pipes but can be extended without much difficulty to pipeline networks. Due to the many important applications, the systematic approximation of parameter dependent hyperbolic problems and, in particular, the preservation of asymptotic stability have been investigated intensively as well [2,4,7,8,9]. For the discretization of the model problem (1)-(2) we here consider a mixed finite element method in space combined with an implicit Runge-Kutta time-stepping scheme. The resulting method can be shown to exactly conserve mass and to be slightly dissipative in energy, thus capturing the relevant physical behavior [5]. In this paper, we additionally establish the following properties:
(R3) The scheme is asymptotic preserving, i.e., the solutions (p^ǫ_{h,τ}, m^ǫ_{h,τ}) converge with ǫ → 0 to the solution (p^0_{h,τ}, m^0_{h,τ}) of the parabolic limit problem, with a constant C in the corresponding estimate that is independent of ǫ and of the discretization parameters h and τ.
(R4) The method is uniformly exponentially stable, i.e., for constant boundary data the solutions (p^ǫ_{h,τ}, m^ǫ_{h,τ}) converge towards the steady state (p̄_h, m̄_h), with constants C and γ > 0 that are independent of ǫ and the discretization parameters h, τ.
The numerical method is also well-balanced in the sense that it automatically provides a stable approximation (p̄_h, m̄_h) for the corresponding stationary problem. Since the proposed discretization strategy is of variational and dissipative nature, the above assertions can be proven with only slight modifications of the energy arguments used on the continuous level. In summary, we thus obtain uniformly stable and accurate approximations for the parameter dependent problem (1)-(2) that capture all relevant physical and mathematical properties of the underlying system.
The remainder of this note is organized as follows: In Section 2, we prove the assertions (R1) and (R2) for the case of a single pipe. Section 3 is then concerned with the numerical approximation and the proof of assertions (R3) and (R4) for a single pipe. In Section 4, we briefly indicate how the results can be generalized with minor modifications to pipe networks. In Section 5, we discuss in detail a specific test problem and present numerical results that illustrate the sharpness of our estimates and also indicate directions for possible improvements.
Analysis on a single pipe
Let us start with describing in more detail the model problem under investigation. The pipe shall be represented by the unit interval and we consider
$$\partial_t p^{\epsilon} + \partial_x m^{\epsilon} = 0 \quad \text{in } (0,1),\ t > 0, \qquad (3)$$
$$\epsilon^2 \partial_t m^{\epsilon} + \partial_x p^{\epsilon} + a\, m^{\epsilon} = 0 \quad \text{in } (0,1),\ t > 0, \qquad (4)$$
complemented by initial values for p^ǫ and, if ǫ > 0, also for m^ǫ. We assume that $0 < \underline{a} \le a(x) \le \bar{a}$ and that the pressure at the boundary is given by
$$p^{\epsilon}(0,t) = g_0 \quad \text{and} \quad p^{\epsilon}(1,t) = g_1, \qquad t > 0. \qquad (5)$$
For ease of presentation g₀, g₁ are assumed to be independent of time here. Other boundary conditions could be considered with obvious modifications. From standard results of semigroup theory, one can easily deduce the following.
Note that only one single initial condition is required in the parabolic limit. By elementary arguments one can verify that the corresponding stationary problem (6)-(8) is independent of ǫ and has a unique solution (p̄, m̄) ∈ H¹(0,1) × H¹(0,1) as well. Using standard energy arguments and the linearity of the time-dependent and of the stationary problem, one can then establish the following assertions.
Lemma 2. Let (p^ǫ, m^ǫ) and (p̄, m̄) denote solutions of (3)-(5) and (6)-(8), respectively. Then for any ǫ ≥ 0 and any t ≥ 0, there holds
$$\|p^{\epsilon}(t) - \bar p\|^2 + \epsilon^2\|m^{\epsilon}(t) - \bar m\|^2 + 2\underline{a}\int_0^t \|m^{\epsilon}(s) - \bar m\|^2\,ds \le \|p_0 - \bar p\|^2 + \epsilon^2\|m_0 - \bar m\|^2.$$
For ǫ > 0, one can additionally bound the time derivatives of (p^ǫ, m^ǫ) by
$$\|\partial_t p^{\epsilon}(t)\|^2 + \epsilon^2\|\partial_t m^{\epsilon}(t)\|^2 \le \|\partial_x m_0\|^2 + \epsilon^{-2}\,\|\partial_x p_0 + a\, m_0\|^2.$$
Here and below, ‖·‖ and (·,·) denote the norm and the scalar product on L²(0,1). In addition, the functions p^ǫ, m^ǫ are understood as functions of time with values in Hilbert spaces. The fact that the second estimate degenerates as ǫ → 0 resembles the fact that the second initial condition becomes superfluous in the parabolic limit.
Proof. Due to linearity of the problem, we may assume without loss of generality that g₀ = g₁ = 0 and hence p̄ ≡ m̄ ≡ 0. From (3)-(4) we then get
$$\frac{d}{dt}\frac{1}{2}\big(\|p^{\epsilon}(t)\|^2 + \epsilon^2\|m^{\epsilon}(t)\|^2\big) = -(\partial_x m^{\epsilon}, p^{\epsilon}) - (\partial_x p^{\epsilon}, m^{\epsilon}) - (a\, m^{\epsilon}, m^{\epsilon}).$$
Using integration-by-parts for the second term in the last line, the homogeneous boundary conditions for p^ǫ, and the lower bound for the parameter a, we get
$$\frac{d}{dt}\frac{1}{2}\big(\|p^{\epsilon}(t)\|^2 + \epsilon^2\|m^{\epsilon}(t)\|^2\big) \le -\underline{a}\,\|m^{\epsilon}(t)\|^2.$$
The first estimate now follows by integration with respect to time. Next assume that (p^ǫ, m^ǫ) ∈ C²(ℝ₊; L²(0,1) × L²(0,1)). Then by formal differentiation of the problem one can see that the time derivative (∂_t p^ǫ, ∂_t m^ǫ) also solves (3)-(5) with homogeneous boundary conditions. The previous estimate thus yields
$$\|\partial_t p^{\epsilon}(t)\|^2 + \epsilon^2\|\partial_t m^{\epsilon}(t)\|^2 \le \|\partial_t p^{\epsilon}(0)\|^2 + \epsilon^2\|\partial_t m^{\epsilon}(0)\|^2.$$
The differential equations (3) and (4) can be used to replace the terms on the right hand side, which proves the second estimate for the case of smooth solutions. The general case finally follows by a density argument.
⊓ ⊔
A combination of these energy estimates allows us to provide a precise formulation and to prove the first assertion about solutions of the continuous problem.
Proof. Let r^ǫ = p^ǫ − p^0 and w^ǫ = m^ǫ − m^0 denote the differences between the solutions of the hyperbolic and the parabolic problem. Then by linearity of the equations, one can deduce that r^ǫ = 0 at the boundary and that
$$\partial_t r^{\epsilon} + \partial_x w^{\epsilon} = 0, \qquad \epsilon^2 \partial_t w^{\epsilon} + \partial_x r^{\epsilon} + a\, w^{\epsilon} = -\epsilon^2\, \partial_t m^0.$$
Applying similar arguments as in the proof of the previous lemma then leads to
$$\frac{d}{dt}\frac{1}{2}\big(\|r^{\epsilon}\|^2 + \epsilon^2\|w^{\epsilon}\|^2\big) \le -\underline{a}\,\|w^{\epsilon}\|^2 + \epsilon^2\,\|\partial_t m^0\|\,\|w^{\epsilon}\|.$$
Multiplication by two and integration with respect to time further yields the asserted bound for ‖r^ǫ(t)‖² + ǫ²‖w^ǫ(t)‖². Since p^ǫ and p^0 satisfy the same initial conditions, we have r^ǫ(0) = 0, and the remaining integral on the right hand side can be estimated by Lemma 2.
⊓ ⊔
The estimates of Lemma 2 provide uniform bounds for the distance to steady state. A refined analysis reveals that in fact exponential convergence takes place.
Theorem 2 asserts an exponential estimate of the form
$$\|p^{\epsilon}(t) - \bar p\|^2 + \epsilon^2\|m^{\epsilon}(t) - \bar m\|^2 \le C\, e^{-\gamma (t-s)}\big(\|p^{\epsilon}(s) - \bar p\|^2 + \epsilon^2\|m^{\epsilon}(s) - \bar m\|^2\big),$$
which holds for all 0 ≤ s ≤ t and with some constants C, γ > 0 independent of ǫ.
Proof. Set τ = t/ǫ and σ = s/ǫ and define π^ǫ(τ) = p^ǫ(t) and µ^ǫ(τ) = ǫ m^ǫ(t). Then by elementary calculations, one can see that the rescaled functions satisfy a damped wave system with wave speed one and friction parameter a/ǫ, as verified in the worked computation below. The exponential convergence for this problem has been established in [5], and a direct application of Theorem 3.3 in [5] yields the corresponding decay estimate. Using τ = t/ǫ and σ = s/ǫ and the definition of π^ǫ and µ^ǫ then directly yields the estimate for ǫ > 0. The result for ǫ = 0 follows from the uniformity of the estimates for ǫ > 0 and the convergence to the parabolic limit. ⊓ ⊔
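For completeness, here is a worked check of the rescaling step; the algebra uses only the chain rule and equations (3)-(4), and the resulting form of the rescaled system is our reading of the argument, not quoted from [5].

```latex
% With \tau = t/\epsilon, \pi^\epsilon(\tau) = p^\epsilon(t) and
% \mu^\epsilon(\tau) = \epsilon\, m^\epsilon(t), the chain rule and (3)-(4) give
\begin{align*}
  \partial_\tau \pi^\epsilon
    &= \epsilon\,\partial_t p^\epsilon
     = -\epsilon\,\partial_x m^\epsilon
     = -\partial_x \mu^\epsilon, \\
  \partial_\tau \mu^\epsilon
    &= \epsilon^2\,\partial_t m^\epsilon
     = -\partial_x p^\epsilon - a\, m^\epsilon
     = -\partial_x \pi^\epsilon - \frac{a}{\epsilon}\,\mu^\epsilon,
\end{align*}
% i.e., a damped wave system with wave speed one and friction a/\epsilon, which
% is bounded below by \underline{a} for all 0 < \epsilon \le 1, so the uniform
% exponential decay result of [5] applies with rates independent of \epsilon.
```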
A mixed finite element Runge-Kutta scheme
For the discretization of problem (3)-(5) we consider a mixed finite element approximation in space, with discrete spaces Q_h and V_h for the pressure and the mass flux, combined with a Runge-Kutta time stepping scheme with time step τ; we refer to the resulting fully discrete scheme as Problem 1 in the following. Recall that (·, ·) denotes the scalar product of L²(0,1). Existence of a unique discrete solution (p^ε_{h,τ}, m^ε_{h,τ}) to Problem 1 and of a unique solution (p̄_h, m̄_h) of the corresponding stationary problem can be deduced from the results in [5].
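To make this type of scheme concrete, the following self-contained Python sketch implements one natural realization: a staggered-grid discretization (to which a lowest-order mixed finite element method with mass lumping reduces) combined with an implicit Euler time step. The extracted text does not reproduce the exact variational form of Problem 1, so the details below (grid, time integrator, and treatment of the boundary pressures) are illustrative assumptions rather than the scheme of the paper.

import numpy as np

# dt p + dx m = 0 and eps^2 dt m + dx p + a m = 0 on (0,1), pressure given at x=0,1.
# Staggered grid: p on cell centers, m on nodes; implicit Euler in time.
N, eps = 100, 0.1
h, tau = 1.0 / N, 1e-3
g0 = g1 = 0.0                              # boundary pressures
a = np.ones(N + 1)                         # damping a(x) >= a_min > 0, at m-nodes
xm = np.linspace(0.0, 1.0, N + 1)
xp = 0.5 * (xm[:-1] + xm[1:])
p = np.sin(np.pi * xp)                     # initial datum p_0
m = np.zeros(N + 1)                        # initial datum m_0

# Each implicit Euler step solves A u = b with unknowns u = [p^k, m^k].
n = N + (N + 1)
A = np.zeros((n, n))
for j in range(N):                         # p-rows: p_j^k - p_j^{k-1} + tau*(m_{j+1}^k - m_j^k)/h = 0
    A[j, j] = 1.0
    A[j, N + j] -= tau / h
    A[j, N + j + 1] += tau / h
for i in range(N + 1):                     # m-rows: eps^2*(m_i^k - m_i^{k-1}) + tau*(dx p^k + a_i m_i^k) = 0
    r = N + i
    A[r, r] = eps**2 + tau * a[i]
    if i == 0:                             # one-sided difference using the boundary value g0
        A[r, 0] += tau / (0.5 * h)
    elif i == N:                           # one-sided difference using the boundary value g1
        A[r, N - 1] -= tau / (0.5 * h)
    else:
        A[r, i] += tau / h
        A[r, i - 1] -= tau / h

for k in range(1, 2001):
    b = np.concatenate([p, eps**2 * m])
    b[N] += tau * g0 / (0.5 * h)           # boundary data enters the first and last m-row
    b[-1] -= tau * g1 / (0.5 * h)
    u = np.linalg.solve(A, b)
    p, m = u[:N], u[N:]
    if k % 500 == 0:                       # crude discrete energy, decays uniformly in eps
        print(k, h * np.sum(p**2) + eps**2 * h * np.sum(m**2))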
Lemma 3. For any ε ≥ 0, Problem 1 admits a unique solution (p^ε_{h,τ}, m^ε_{h,τ}), and (p̄_h, m̄_h) denotes the unique solution of the corresponding stationary problem. For any ε ≥ 0 and all k ≥ 0, there holds

‖p^ε_{h,τ}(t_k) − p̄_h‖² + ε² ‖m^ε_{h,τ}(t_k) − m̄_h‖² ≤ ‖p_0 − p̄_h‖² + ε² ‖m_0 − m̄_h‖².

For ε > 0, we additionally have

τ Σ_{k≥1} ‖d_τ m^ε_{h,τ}(t_k)‖² ≤ C ε^{−2}

with constant C that is independent of ε and the discretization parameters h and τ. Here d_τ u(t_k) = (u(t_k) − u(t_{k−1}))/τ denotes the backward difference quotient.
Proof. Without loss of generality, we may set g_0 = g_1 = 0 and hence p̄_h ≡ m̄_h ≡ 0. For ease of notation, let us abbreviate p^k := p^ε_{h,τ}(t_k) and m^k := m^ε_{h,τ}(t_k). Then by elementary calculations, one can verify that

‖p^k‖² − ‖p^{k−1}‖² + ε² (‖m^k‖² − ‖m^{k−1}‖²) ≤ 2 (p^k − p^{k−1}, p^k) + 2 ε² (m^k − m^{k−1}, m^k).

Using the discrete problem and the lower bounds for the parameter, we thus obtain

‖p^k‖² + ε² ‖m^k‖² ≤ ‖p^{k−1}‖² + ε² ‖m^{k−1}‖² − 2 τ ‖a^{1/2} m^k‖².

The first estimate now follows by recursion and by noting that ‖p^0‖ ≤ ‖p_0‖ and ‖m^0‖ ≤ ‖m_0‖, since the initial iterates were defined as L² orthogonal projections of the initial values onto the respective subspaces. By linearity of the problem, one can then deduce in a similar manner that

‖d_τ p^k‖² + ε² ‖d_τ m^k‖² + 2 τ Σ_{j=2}^k ‖a^{1/2} d_τ m^j‖² ≤ ‖d_τ p^1‖² + ε² ‖d_τ m^1‖².

Using the discrete problem for k = 1, Young's inequality, the bounds for the parameter a, and the stability of the L² projection in the H¹ norm, we may conclude that

‖d_τ p^1‖² + ε² ‖d_τ m^1‖² ≤ C (‖m_0‖²_{H¹} + ε^{−2} ‖∂_x p_0 + a m_0‖²),

which together with the energy estimate from above completes the proof. □

Similarly as on the continuous level, a combination of the previous estimates now immediately allows us to show convergence of the solutions (p^ε_{h,τ}, m^ε_{h,τ}) of the discrete hyperbolic problem to that of the discrete parabolic problem when ε → 0.
Theorem 3. Let (p^ε_{h,τ}, m^ε_{h,τ}) and (p^0_{h,τ}, m^0_{h,τ}) denote the solutions of Problem 1 for ε > 0 and for ε = 0, respectively. Then for all k ≥ 0,

‖p^ε_{h,τ}(t_k) − p^0_{h,τ}(t_k)‖² + τ Σ_{j=1}^k ‖a^{1/2} (m^ε_{h,τ}(t_j) − m^0_{h,τ}(t_j))‖² ≤ C ε²

with constant C independent of ε and of the discretization parameters h and τ.

Proof. Set r^k = p^ε_{h,τ}(t_k) − p^0_{h,τ}(t_k) and w^k = m^ε_{h,τ}(t_k) − m^0_{h,τ}(t_k). Then by linearity of the discrete problem, one can see that (r^k, w^k) solves the discrete parabolic problem with the additional residual term −ε² (d_τ m^ε_{h,τ}(t_k), v_h) on the right hand side, for all q_h ∈ Q_h and v_h ∈ V_h and for all k ≥ 0. Testing with q_h = r^k and v_h = w^k and proceeding similarly as in the previous lemmas leads to the energy estimate

‖r^k‖² + τ Σ_{j=1}^k ‖a^{1/2} w^j‖² ≤ ‖r^0‖² + (ε⁴ / a̲) τ Σ_{j=1}^k ‖d_τ m^ε_{h,τ}(t_j)‖².

The assertion now follows by noting that r^0 ≡ 0 and application of the second estimate of the previous lemma to estimate the last term in this expression. □

Similarly as on the continuous level, one can again prove uniform exponential convergence of discrete solutions to steady states.
Theorem 4. Let (p^ε_{h,τ}, m^ε_{h,τ}) and (p̄_h, m̄_h) be as in Lemma 3. Then

‖p^ε_{h,τ}(t_k) − p̄_h‖² + ε² ‖m^ε_{h,τ}(t_k) − m̄_h‖² ≤ C e^{−γ (t_k − t_j)} (‖p^ε_{h,τ}(t_j) − p̄_h‖² + ε² ‖m^ε_{h,τ}(t_j) − m̄_h‖²)

for all 0 ≤ j ≤ k with constants C, γ > 0 that are independent of ε, h, and τ.
Proof. Using a rescaling like in the proof of Theorem 2, the result for ε > 0 can be deduced directly from Theorem 7.4 in [5]. The estimate for ε = 0 follows from the uniformity of the estimates and the convergence to the parabolic limit. □
Extension to pipe networks
The results of the previous sections can be extended to the following class of hyperbolic problems on networks: Let G = (V, E) be a finite directed graph representing the topology of the network. On every single pipe e, the dynamics shall again be described by the linear damped hyperbolic system

∂_t p^ε_e + ∂_x m^ε_e = 0, (9)
ε² ∂_t m^ε_e + ∂_x p^ε_e + a_e m^ε_e = 0. (10)
At any junction v of several pipes e ∈ E(v) of the network, we require continuity of the pressure and conservation of mass, i.e.,

p^ε_e(v) = p^ε_{e'}(v) for all e, e' ∈ E(v), (11)
Σ_{e ∈ E(v)} n_e(v) m^ε_e(v) = 0. (12)

Here n_e(v) takes the value minus or plus one, depending on whether the pipe e starts or ends at the junction v. At the boundary vertices v of the network, we require that the pressure is prescribed, i.e.,

p^ε_e(v) = g_v. (13)

Using the arguments developed in [6], all results stated in Theorems 1-4 hold verbatim also for the system (9)-(13). Details are left to the interested reader.
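The sign convention n_e(v) and the coupling condition (12) are straightforward to encode. The following small Python sketch (the toy graph and all variable names are illustrative, not taken from the paper) checks the mass-conservation residual at a junction:

# Directed pipes e = (start, end); n_e(v) = -1 if e starts at v, +1 if e ends at v.
edges = [("v0", "v1"), ("v1", "v2"), ("v1", "v3")]

def n(e, v):
    return -1 if e[0] == v else (1 if e[1] == v else 0)

vertices = {v for e in edges for v in e}
E = {v: [e for e in edges if v in e] for v in vertices}   # E(v): pipes meeting v

def mass_residual(v, m_end):
    # m_end[(e, v)] holds the trace of m_e at vertex v
    return sum(n(e, v) * m_end[(e, v)] for e in E[v])

# Flux 1.0 enters junction v1 through ("v0","v1") and leaves through the two
# outgoing pipes with 0.4 and 0.6, so condition (12) is satisfied:
m_end = {(("v0", "v1"), "v1"): 1.0,
         (("v1", "v2"), "v1"): 0.4,
         (("v1", "v3"), "v1"): 0.6}
print(mass_residual("v1", m_end))        # -> 0.0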
Numerical validation
We now illustrate our theoretical results by considering in detail a particular model problem.
For constant damping parameter a ≡ 1, initial data p_0 = sin(πx), m_0 ≡ 0, and boundary values g_0 = g_1 ≡ 0, the solution of problem (3)-(5) is given by

p^ε(x, t) = ((1 + s)/(2s) e^{λ₊ t} + (s − 1)/(2s) e^{λ₋ t}) sin(πx)

and

m^ε(x, t) = (π/s) (e^{λ₊ t} − e^{λ₋ t}) cos(πx),

with λ± = (−1 ± s(ε))/(2ε²) and parameter s(ε) = √(1 − 4π²ε²). By Taylor expansion w.r.t. ε, we deduce that λ₊ = −π² − π⁴ε² + O(ε⁴) and λ₋ = −ε^{−2} + π² + O(ε²). For ε = 0, we simply obtain p^0(x, t) = e^{−π²t} sin(πx) and m^0(x, t) = π e^{−π²t} cos(πx), and the steady state for this problem is given by p̄, m̄ ≡ 0. From the explicit solution formulas, one can then immediately see that exponential convergence towards the steady state takes place as t → ∞ for all 0 ≤ ε ≤ 1, with a rate that is independent of ε, which was the assertion of Theorem 2. In Table 1, we depict numerical results obtained with the numerical scheme discussed in Section 3. As predicted by Theorem 4, the exponential convergence towards steady state as t → ∞ is uniform in ε also for the discrete schemes. Mesh independence of the exponential decay rate was already demonstrated in [6].
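The closed-form expressions above are easy to check numerically. The following short script (a sanity check of the formulas as stated, nothing more) verifies the initial conditions and the approach to the parabolic limit as ε → 0:

import numpy as np

def solution(eps, x, t):
    s = np.sqrt(1.0 - 4.0 * np.pi**2 * eps**2 + 0j)    # complex-safe for eps > 1/(2*pi)
    lp = (-1.0 + s) / (2.0 * eps**2)
    lm = (-1.0 - s) / (2.0 * eps**2)
    p = ((1 + s) / (2 * s) * np.exp(lp * t)
         + (s - 1) / (2 * s) * np.exp(lm * t)) * np.sin(np.pi * x)
    m = (np.pi / s) * (np.exp(lp * t) - np.exp(lm * t)) * np.cos(np.pi * x)
    return p.real, m.real

x = np.linspace(0.0, 1.0, 201)
p, m = solution(0.05, x, 0.0)                          # initial conditions (5)
print(np.max(np.abs(p - np.sin(np.pi * x))), np.max(np.abs(m)))      # both ~ 0
p, m = solution(1e-4, x, 0.3)                          # small eps vs. parabolic limit
p0 = np.exp(-np.pi**2 * 0.3) * np.sin(np.pi * x)
m0 = np.pi * np.exp(-np.pi**2 * 0.3) * np.cos(np.pi * x)
print(np.max(np.abs(p - p0)), np.max(np.abs(m - m0)))  # small, consistent with Theorem 1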
Let us next have a closer look at the convergence to the parabolic limit. Using the analytical solution formulas and Taylor expansion w.r.t. ε, one can deduce that

‖p^ε(t) − p^0(t)‖² = O(ε⁴) and ∫_0^t ‖m^ε(s) − m^0(s)‖² ds = O(ε²),

which yields exactly the asymptotic behavior predicted in Theorem 1. In Table 2, we display the corresponding results obtained with the proposed discretization scheme. Also here we can exactly observe the convergence rate predicted by Theorem 3. Note that the second term in the error measure is strictly increasing w.r.t. time, which together with the exponential convergence to steady states explains that the error is almost independent of t here.

Table 2: Error ‖p^ε_{h,τ}(t_k) − p^0_{h,τ}(t_k)‖² + Σ_{j=1}^k a ‖m^ε_{h,τ}(t_j) − m^0_{h,τ}(t_j)‖² = O(ε^α) between the discrete approximations for the hyperbolic problem and the parabolic limit problem for different values of ε and time steps t_k, and the observed convergence rate α. Discretization with h = 0.01 and τ = 10^{−5}.
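Observed rates α of this kind are obtained by comparing errors at successive values of ε; a minimal helper (the error values below are made-up placeholders, not the entries of Table 2) is:

import numpy as np

eps = np.array([0.1, 0.05, 0.025, 0.0125])
err = np.array([2.1e-3, 5.3e-4, 1.33e-4, 3.3e-5])   # placeholder O(eps^2) data
alpha = np.log(err[:-1] / err[1:]) / np.log(eps[:-1] / eps[1:])
print(alpha)                                        # ~ 2 for O(eps^2) behavior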
In Table 3, we report on further numerical tests illustrating the independence of the results from the discretization parameters. Again, the observations are in perfect agreement with the theoretical predictions made in Theorem 3.
Let us finally note that the previous formulas reveal that the error between the solutions of the hyperbolic and the parabolic problem actually behaves like

p^ε(x, t) − p^0(x, t) ≈ π² ε² (1 − π² t) e^{−π² t} sin(πx) for t ≫ ε.

A similar behavior can be observed for the discrete approximations obtained with the method discussed in Section 3. A theoretical explanation of this fact would require a refined analysis which is left for future research. | 2017-04-18T14:32:05.000Z | 2016-08-01T00:00:00.000 | {
"year": 2017,
"sha1": "22606d15132d3c436f1a74344483a50027e324b5",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1701.04011",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "22606d15132d3c436f1a74344483a50027e324b5",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
256454253 | pes2o/s2orc | v3-fos-license | Securitisation of Ukrainian Critical Infrastructures: The Case of the Failure of SCADA System in Protecting the Power Grids
Critical infrastructures are an important element in supporting social cohesion in a given area. It is therefore necessary to protect critical infrastructures in order to maintain the sustainability of these assets. States have made many attempts to control the security of their critical infrastructures, one of them being the Supervisory Control and Data Acquisition (SCADA) system, a control system used to monitor processes and retrieve data under the supervision of an operator. However, even though countries are aware of the need for preventive action over their critical infrastructures, such protection can still fail. In this case, Ukraine, which had a relatively secure control system, failed to protect its power grids from multiple hacker attacks that contributed to blackouts in December 2015. The devastating failure of Ukraine's security system led public opinion to point a finger at Russia, since the relationship between the two countries was already strained. In this context, Ukraine issued a speech act to securitise its critical infrastructures. Drawing on securitisation theory, this article discusses the fruitfulness of that speech act after the failure of the security system protecting Ukraine's power grids.
Introduction
Critical infrastructures are the critical elements that keep a nation's longevity sustainable. Infrastructures can act as conduits for communication across physical distances, bringing together people from different places and establishing the foundation for modern economic and social systems (Larkin, 2013). In other words, critical infrastructures are made up of resources and systems, whether virtual or physical, that are so crucial to a country's economic health, national security, public health, and safety, or any combination of these, that any interruption of their services could have a catastrophic effect (Alcaraz & Zeadally, 2015). Thus, the development of infrastructures results in the creation of infrastructural systems that aid the organization of people's daily lives (Hughes, 1987, 1993, in Larkin, 2013). Since infrastructures are crucial for both economic and social life, if critical infrastructures are weak, economic and social progress will be difficult (Yusta et al., 2011). A state's critical infrastructures are likewise essential for its economic prosperity, military capacity, and political vitality (Collier & Lakoff, 2008).
The newly emerging threats, namely technological accidents, energy crises, and terrorism, differ markedly from those of the Cold War framework, which makes their real impact difficult to predict and calculate (Collier & Lakoff, 2008; Yusta et al., 2011).
Every country's economy relies on its critical infrastructures, particularly its power grid control system. Power grid control systems are the target of inexpensive cyber-attacks that have the potential to affect entire nations or even continents, which is why cyber security safeguards for the power grid control system are crucial (Jarmakiewicz et al., 2017). Critical infrastructures have, moreover, been analysed as objects of protection and securitisation (Aradau, 2010). Based on the definition of critical infrastructure above, power grids are categorized as critical infrastructure.
Moreover, by using the theory of critical infrastructure protection (CIP), we would examine the SCADA system as a form of CIP for protecting Ukraine's power grids from potential threats.
We would also include securitization theory to back up the CIP theory. However, we must first understand what security is in order to discuss securitization properly. According to Buzan et al. (1998), security is the move that takes politics beyond the established rules of the game and frames the issue either as a special kind of politics or as above politics. Security is a self-referential practice, meaning that it is only within this practice that an issue becomes a security problem; it becomes one because the issue is presented as such a concern, not necessarily because a genuine existential threat exists. Securitization can thus be seen as a more extreme version of politicization, through which an issue is presented as an existential threat and is, therefore, subject to emergency measures beyond normal political procedure (Buzan et al., 1998).
As a result, Buzan et al. (1998) concluded that the true definition and criteria of securitization are constituted by the intersubjective establishment of an existential threat salient enough to have significant political impact. Furthermore, according to Buzan et al. (1998), a case of securitization is accomplished if, given the priority and urgency of an existential threat, the securitizing actor manages to break free of the procedures or rules by which it would otherwise be bound.
In theory, the securitization process is referred to as a speech act which, unlike a sign referring to something more tangible, is the utterance itself. As Austin (1975, in Buzan et al., 1998) argues, by saying the words, something is done. Buzan et al. (1998) illustrate this with the case of computer security (Chronicle, 1996, in Buzan et al., 1998), where a securitizing move could potentially lead to actions within the computer field but with no cascading effects on other sectors (Buzan et al., 1998). We will recognize the evaluation of | 2023-02-01T16:13:21.910Z | 2022-12-31T00:00:00.000 | {
"year": 2022,
"sha1": "cbd88752cb9fd28788cd2c43eb459cedbf105196",
"oa_license": null,
"oa_url": "https://doi.org/10.33822/mjihi.v5i2.4878",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "9c3e9a6ed66875fc1024aaf8bca83a91937a5b19",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
118152738 | pes2o/s2orc | v3-fos-license | Anomalous charge and negative-charge-transfer insulating state in cuprate chain-compound KCuO_2
Using a combination of X-ray absorption spectroscopy experiments with first principle calculations, we demonstrate that insulating KCuO_2 contains Cu in an unusually-high formal-3+ valence state, the ligand-to-metal (O to Cu) charge transfer energy is intriguingly negative (Delta~ -1.5 eV) and has a dominant (~60%) ligand-hole character in the ground state akin to the high Tc cuprate Zhang-Rice state. Unlike most other formal Cu^{3+} compounds, the Cu 2p XAS spectra of KCuO_2 exhibits pronounced 3d^8 (Cu^{3+}) multiplet structures, which accounts for ~40% of its ground state wave-function. Ab-initio calculations elucidate the origin of the band-gap in KCuO_2 as arising primarily from strong intra-cluster Cu 3d - O 2p hybridizations (t_{pd}); the value of the band-gap decreases with reduced value of t_{pd}. Further, unlike conventional negative charge-transfer insulators, the band-gap in KCuO_2 persists even for vanishing values of Coulomb repulsion U, underscoring the importance of single-particle band-structure effects connected to the one-dimensional nature of the compound.
The electronic properties of strongly correlated transition metal (TM) oxides − which consist of partially filled TM d-orbitals hybridized with the ligand (oxygen) p-orbitals − are effectively categorized under the well known Zaanen-Sawatzky-Allen (ZSA) phase diagram [1][2][3], a guiding principle for materials scientists that takes into consideration the on-site d-d Coulomb interaction energy at the TM site (U) and the ligand-to-TM charge transfer energy (∆). There is an intriguing region of the ZSA phase diagram, comprising compounds with negative values of ∆, that has been less explored [4][5][6][7][8][9][10]. In TM oxides the value of ∆ decreases with increasing valence (oxidation) state of the TM ion, and for unusually high valence states ∆ can even become negative [8]. Such high-valence compounds are very unstable, and only a few pristine negative-∆ compounds exist (see Table I). For such highly covalent compounds, it is energetically favorable to transfer an electron from the ligand to the metal ion, as the energy cost ∆ for this process is negative, giving rise to a large ligand-hole character and usually a metallic nature of the ground state. However, there exists a very select number of compounds which are insulating while having negative or extremely small values of ∆, driven by strong metal-ligand hybridization in combination either with electronic correlations, as in the correlated covalent insulators [4,6,15] La2CuO4 [16] and Sr2CuO3 [17], or with single-particle band-structure effects, as in NaCuO2 [5,7,8,15,18,19].

In this work, using X-ray absorption spectroscopy (XAS) experiments, model XAS and density functional theory (DFT) calculations, we have investigated the electronic structure of KCuO2 [21], and have elucidated the nature of its experimentally observed insulating state. Our results show that KCuO2 hosts Cu in a formal 3+ valence state, has a negative ∆ and a dominant ligand-hole character in its ground state. We find a charge band gap (∼1.24 eV) with a preponderance of O 2p states at the valence band and conduction band edges, which originates from strong intra-cluster Cu 3d - O 2p hybridization in this negative-∆ compound and competes with point-charge Coulomb contributions to the crystal-field energies of the Cu t2g orbitals. The chain-topology-driven band gap persists for vanishing U, which is distinct from the conventional picture of correlated covalent insulators [4,6,15], and also decreases with decreasing values of t_pd. The inclusion of strong correlations is, however, necessary to account for the experimental value of the gap. Our work thus establishes that KCuO2, similar to NaCuO2, is a negative-∆ insulator where the insulating behaviour arises from both single-particle band-structure effects from the unique one-dimensional CuO2 chain geometry and strong electron-electron correlations.

Methods (experimental). − Polycrystalline KCuO2 in a single-phase orthorhombic Cmcm space group [22] was synthesized by mixing KO2 and CuO powders in a 1:1 ratio in an Ar-filled glovebox, followed by sintering under a dry O2 atmosphere for 2.5 days at 450 °C [23]. XA measurements at the Cu L3,2- and O K-edges were performed on the 4-ID-C beam line of the Advanced Photon Source (APS) at Argonne National Laboratory, USA. The sample powder was mounted on the holder using carbon tape under a nitrogen gas atmosphere to ensure minimum exposure to air, and XAS measurements in total-electron-yield (TEY), total-fluorescence-yield (TFY), and in the inverse-partial-fluorescence-yield
(IPFY) modes were performed at room temperature without any additional surface preparation. The probing depth in the case of TEY (∼5 nm) is much smaller than that of TFY or IPFY (∼100 nm) [24]; thus, while TEY probes the under-coordinated surface electronic structure of a solid, TFY and IPFY are well suited to investigate the bulk electronic structure. For the IPFY measurement, the non-resonant O K-edge was monitored, and thus IPFY is further free from any self-absorption effects, unlike TFY [25].

Methods (theory). − We have performed three sets of complementary calculations. To act as reference XAS spectra, calculations of the Cu L3,2 XA spectrum on an orthorhombic Cmcm space-group lattice of KCuO2 [22] (e.g., Figs. 1(a-b)) were performed using the Finite Difference Method Near-Edge Structure (FDMNES) code [26]. The FDMNES calculations were performed using full multiple-scattering theory with a cluster radius of 6 Å around the absorbing Cu atom and an on-site Coulomb energy (U) of 8 eV.
In order to determine the relative TM-O covalencies, cluster calculations simulating the Cu L2,3 XA spectrum of a single CuO2 planar cluster with D4h symmetry [27] were performed using the Charge Transfer Multiplet program for X-ray Absorption Spectroscopy (CTM4XAS) [28]. The charge transfer energy ∆ between the Cu 3d and O 2p orbitals is defined as ∆ = E(d^{n+1}L) − E(d^n), where E(d^n) is the multiplet-averaged energy for n-electron occupancy of the Cu 3d levels and E(d^{n+1}L) denotes the multiplet-averaged energy obtained after transferring one electron from an O 2p level to the Cu 3d level having n = 8 electrons, corresponding to the formal (3+) valence state of Cu. For the CTM4XAS calculations, the basis size was restricted to at most one electron charge-transfer from O 2p to Cu 3d.
To determine the density of states (DOS) of KCuO2 and NaCuO2, the rotationally invariant LDA+U scheme of Dudarev et al. [29] was employed in DFT electronic structure calculations. Calculations were carried out with the Vienna Ab initio Simulation Package (VASP) [30] using projector-augmented wave pseudopotentials [31,32]. The first Brillouin zone was sampled using a 12×12×6 Monkhorst-Pack set of k-points and a 400 eV energy cutoff.
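For readers wishing to reproduce this type of setup, a minimal VASP input could look as follows. The cutoff and k-mesh are taken from the text; all other tags, and the assumed POSCAR species order (K, Cu, O), are illustrative assumptions rather than the authors' actual input files. In the Dudarev scheme (LDAUTYPE = 2), only the difference U_eff = U − J enters.

INCAR (sketch):
  ENCUT    = 400        ! plane-wave cutoff (eV), as stated in the text
  LDAU     = .TRUE.     ! activate DFT+U
  LDAUTYPE = 2          ! rotationally invariant Dudarev scheme [29]
  LDAUL    = -1 2 -1    ! U applied to the Cu d shell only (species order K Cu O assumed)
  LDAUU    = 0 8 0      ! U = 8 eV on Cu 3d
  LDAUJ    = 0 0 0      ! J absorbed into U_eff

KPOINTS (12 x 12 x 6 Monkhorst-Pack grid, as stated in the text):
  Automatic mesh
  0
  Monkhorst-Pack
  12 12 6
  0 0 0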
Results and discussion. − Two distinct groups of experimentally observed XAS peaks, one around 930 eV (the L3 region) and another around 950 eV (the L2 region), can be clearly identified at the Cu L3,2-edge of KCuO2 (cf. Fig. 1(c)). While the spectral features of the L3 and L2 regions are nearly identical, they are separated by about 20 eV due to the 3/2 × Cu 2p core spin-orbit coupling. Looking closely at the Cu L3 region, we observe two intense peaks at 930.3 eV and 932.3 eV, which correspond to the Cu d^9 and Cu d^9L initial states, respectively [33][34][35].
The d^9 (Cu^{2+}) peak intensity increases significantly in the TEY mode as compared to the TFY and IPFY modes, indicating an abundance of Cu^{2+} valence states on the surface (see Methods section). This Cu^{2+} presence is believed to arise from surface impurity phases rich in Cu^{2+}. Note that similar d^9 (Cu^{2+}) peaks have been observed in the XA spectra of other formally Cu^{3+} compounds (e.g., NaCuO2 [5,8,33], CaCu3Co4O12 [35] and Cs2KCuF6 [36]). Cu^{2+} impurity phases on the surfaces of these metastable compounds arise due to the loss of superficial anionic atoms during XAS experiments in ultra-high vacuum, which effectively reduces the valence of the surrounding Cu ions [8]. Further, we observed that KCuO2 decomposes into CuO within five to ten minutes of exposure to air. Thus, given these constraints, it is impossible for us to avoid the Cu^{2+}-related impurity peak in the XAS experiments. Within the bulk, KCuO2 is not expected to suffer from such anionic losses and, accordingly, Cu^{2+} peaks of much lower intensity are observed in the bulk-sensitive TFY and IPFY XAS spectra in Fig. 1(c). Some percentage of the TFY and IPFY signals also comes from the surface and near-surface region of the sample (a contribution that is more pronounced for the powder KCuO2 sample than for a scraped bulk-polycrystalline pellet of NaCuO2 [33]), and this still provides significant contributions to the d^9 peak. Since TFY, unlike IPFY, suffers from self-absorption effects [25], this causes the differences in their relative spectral weights.
Focussing henceforth on the IPFY spectrum, as it is both bulk-sensitive and free from self-absorption effects, the main peak, given by the d^9L state, arises due to the transfer of an electron from the surrounding O atoms into the formally Cu 3d^8 (Cu^{3+}) state [33][34][35]. Furthermore, distinct multiplet structures − considered to provide clear evidence for the presence of an ionic Cu^{3+} (d^8) state [33][34][35] − are observed around 940 eV. The presence of significant d^9L and d^8 intensities suggests that a coherent superposition of both states constitutes the ground state of the formal Cu^{3+} ions in KCuO2, similar to that of NaCuO2 [33]. It is important to note that in a Cu 2p-3d XAS process it is difficult to detect contributions from the d^{10}L^2 level to the ground state. However, such contributions are usually small, as determined by X-ray photoelectron spectroscopy on related systems [8].
To further establish the origin of the various features in the experimental XAS spectra, we simulated the Cu L3,2 XAS spectra of KCuO2 corresponding to the d^8, d^9 and d^9L initial-state configurations using the FDMNES code. As shown by the vertical guide lines in Fig. 1(c), the calculated XAS spectra correspond to the d^9 and d^9L features in the experimental spectra, and the observed ionic d^8 experimental features can be broadly understood from the calculated spectrum for the d^8 ionic Cu^{3+} state.
We now compare the L3 energy region of KCuO2 with that of other systems hosting unusual valence states of Cu, such as optimally-doped YBa2Cu3O7−δ (YBCO) [35], LaCuO3 [35], and NaCuO2 [33], in Fig. 2(a), after subtraction of the surface Cu^{2+} impurity peak [37,38]. It is interesting to note that the Zhang-Rice spin-singlet state, d^9L [39,40], which arises in YBCO from external hole-doping by intricate chemical substitution [33], naturally becomes the dominant state in formally Cu^{3+} compounds. This hole-doping mechanism is akin to a self-doping effect [20]. Judging from the intensity ratios shown in Fig. 2, the d^9L charge-transfer state appears dominant over the ionic d^8 state for KCuO2, NaCuO2 and LaCuO3, thus suggesting that the associated charge transfer energies ∆ for all of these compounds are unusually negative. We note that negative values of ∆ have already been proposed for insulating NaCuO2 [5,7,8] and metallic LaCuO3 [14]. A closer analysis of the XA line shapes in Fig. 2(a) points to spectral differences among the various formal Cu^{3+} compounds. Let us focus first on the differences in the XA spectral features related to the d^9L state: the d^9L peak for LaCuO3 is broad and can be well described using two peaks, one centered at 930.8 eV and another at 932.2 eV. This splitting arises from the delocalization of the ligand hole, due to inter-cluster hybridization effects that are aided by the corner-sharing geometry of the CuO6 clusters with a Cu-O-Cu bond angle of 168.3° in LaCuO3 [14] (cf. Fig. 1(b)). For KCuO2 and NaCuO2, on the other hand, such inter-cluster hybridization effects are negligible due to the near-orthogonal Cu-O-Cu bond angle (95.7°) between neighboring CuO4 clusters (cf. Fig. 1(a)), and a single d^9L peak is observed.
The d^8 multiplet region of the formally Cu^{3+} compounds, shown by the shaded area in Fig. 2(a), is discussed next. Covalency and ∆ are not independent, since the relative intensities of the d^8 multiplets with respect to the d^9L peak usually increase with decreasing covalency, and their energy separation increases with larger negative values of ∆ [36]. KCuO2 has stronger multiplet intensities than iso-structural NaCuO2, which suggests a larger contribution of the ionic d^8 state to its ground state. Further, the average energy difference between the d^8 multiplets and the d^9L peak is 5.9 eV and 8.2 eV for KCuO2 and NaCuO2, respectively, thus indicating a smaller negative ∆ for KCuO2.
For the calculated Cu L3,2 XA spectra of a single CuO2 cluster with planar D4h symmetry, we optimized the parameter values to match the calculated energy separations between the average d^8 multiplets and the d^9L main peak with the energy differences obtained from experiment (Fig. 2(a)). The estimated ∆ thus obtained turned out to be −1.5 eV and −2.5 eV for KCuO2 and NaCuO2, respectively. Furthermore, both resultant ground states have dominant d^9L character, 39% d^8 + 61% d^9L for KCuO2 and 36% d^8 + 64% d^9L for NaCuO2, with a higher ionic character for the ground state of KCuO2, as suggested earlier.
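The reported ground-state compositions can be rationalized with a minimal two-configuration model: keeping only |d^8⟩ and |d^9L⟩, with charge-transfer energy ∆ and an effective hybridization T_eff, the ground state follows from a 2×2 diagonalization. The value of T_eff below is an illustrative assumption (it is not quoted in the text), chosen to show how a negative ∆ combined with strong hybridization produces comparable d^8 and d^9L weights; the actual analysis used the full CTM4XAS multiplet calculation.

import numpy as np

delta = -1.5            # E(d9L) - E(d8) in eV for KCuO2, from the text
T_eff = 2.5             # effective d-p hybridization in eV (illustrative assumption)

# Basis ordering: [ |d8>, |d9L> ]
H = np.array([[0.0,   T_eff],
              [T_eff, delta]])
evals, evecs = np.linalg.eigh(H)
gs = evecs[:, 0]        # eigh returns eigenvalues in ascending order
w_d8, w_d9L = gs**2
print(f"d8 weight: {w_d8:.2f}, d9L weight: {w_d9L:.2f}")   # ~0.36 / ~0.64 here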
The O K-edge XA spectrum − which probes the ligand hole states − exhibits a pronounced pre-peak for KCuO2 at 527.6 eV, as seen in Fig. 2(b). The intensity of the O K-edge pre-peak correlates directly with the amount of ligand-hole character in the ground state [36]; thus the strong pre-peak in KCuO2 further establishes the large d^9L character of its ground state.
Fig. 3 shows the density of states (DOS) projected onto orbital contributions for KCuO2 and NaCuO2, which were found to have insulating gaps of 1.24 eV and 0.62 eV, respectively, for a U value of 8 eV. The band gaps in KCuO2 and NaCuO2, however, exist even for U = 0 eV, in agreement with previous observations on NaCuO2 [7,15,18], highlighting the role of single-particle band-structure effects due to the chain topology in giving rise to the insulating state in KCuO2. The inclusion of correlations, however, is essential for increasing the band-gap value as compared to U = 0 eV and bringing it into agreement with the experimental value [8]. Furthermore, the projected DOS shows a strong O character at both the valence and conduction band edges [41]. The Cu t2g levels occur between the lower-lying 3d_{3z²−r²} and higher-lying 3d_{x²−y²} levels, as usually observed for one-dimensional CuO2 chains due to the point-charge (Coulomb) contribution [42]. However, the t2g levels are intriguingly seen to have Cu d_{xz} and Cu d_{yz} character immediately below E_F and Cu d_{xy} character only at further lower energies, which differs from a point-charge (Coulomb) contribution to the crystal-field splitting. A similar effect has been observed in Cs2Au2Cl6, and arises from a dominant pd covalency contribution in the case of negative-∆ compounds [10]; the inversion of the t2g orbitals thus further confirms the negative ∆ in KCuO2.
We also performed a Bader analysis [43] to understand the charge density distribution over the electronic orbitals. The total occupation of the Cu 3d shell in both systems is 8.8, which represents a mixture of d^8 and d^9 states, in qualitative agreement with the cluster calculations, establishing the superposition of both contributions to the ground state of the formally Cu^{3+} ions in KCuO2 and NaCuO2, as discussed earlier.
Conclusions. − We have described the presence of an anomalous charge state of Cu in KCuO2 from experiment and theory. We established the negative charge transfer energy of the KCuO2 ground state and its dominant ligand-hole character, which arise due to large intra-cluster hybridization effects and remain localized due to weak inter-cluster hybridization. A localized cuprate-like Zhang-Rice singlet state thus occurs in every unit cell, which consequently gives rise to the experimentally observed insulating and diamagnetic character of KCuO2 [20,21]. Moreover, KCuO2 exhibits strong d^8-related multiplet structures, resulting from the large ionic Cu^{3+} character of its ground state. KCuO2 is shown to belong to the unusual class of covalency-driven negative charge-transfer insulators, with a correlated gap that is adiabatically connected to the single-particle gap arising from the one-dimensional chain geometry.
Figure 2. (Color online) (a) Cu L3 X-ray absorption (XA) spectra of KCuO2, NaCuO2 [33], LaCuO3 and YBa2Cu3O7−δ (YBCO). The main Cu L3 peak in KCuO2 and NaCuO2, and the shoulder in YBCO around 932 eV, correspond to the d^9L Zhang-Rice singlet state. The d^8 multiplet structures, magnified six-fold for easier observation, are also shown. (b) The O K-edge XA spectrum of KCuO2 contains a pronounced pre-peak around 527.6 eV (shaded area), suggesting a large ligand-hole character of its ground state.
Figure 3. (Color online) Density of states for (a) KCuO2 and (b) NaCuO2; the total contributions from a given atomic species are indicated by trend lines and the orbital projections are shown by colored area plots. A U of 8 eV was used for these calculations. KCuO2 is found to exhibit a larger band gap than NaCuO2.
Table I. Coulomb repulsion U and charge-transfer ∆ energies (in units of eV) for some transition metal oxides with unusually high formal valence states of the B-site (Fe, Co, Ni, Cu) cation. | 2015-04-26T17:26:04.000Z | 2015-04-23T00:00:00.000 | {
"year": 2015,
"sha1": "89bae5f0f94e239ef21349c9dc2502c56adcfb72",
"oa_license": "publisher-specific, author manuscript",
"oa_url": "https://link.aps.org/accepted/10.1103/PhysRevB.92.201108",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "732bfcc97bc8a9a23b79a5e28c28802bdcb514e7",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Physics"
]
} |
211538781 | pes2o/s2orc | v3-fos-license | Infection control link nurse programs in Dutch acute care hospitals; a mixed-methods study
Background Infection control link nurse programs show considerable variation. We report how Dutch link nurse programs are organized, how they progress, and how contextual factors may play a role in the execution of these programs. Methods This mixed-methods study combined a survey and semi-structured interviews with infection control practitioners, based on items of the Template for Intervention Description and Replication (TIDieR) checklist. Results The Netherlands has 74 hospitals; 72 infection control practitioners from 72 different hospitals participated in the survey. Four of these infection control practitioners participated in interviews. A link nurse program was present in 67% of the hospitals; responsibility for 76% of these programs lay solely with the infection prevention and control team. The core component of most programs (90%) was education. Programs that included education on infection prevention topics and training in implementation skills were perceived as more effective than programs without such education or programs where education included only infection prevention topics. The interviews illustrated that these programs were initiated by the infection prevention team with the intention to collaborate with other departments to improve practice. Content for these programs was created at the time of their implementation. Infection control practitioners varied in their ability to express program goals and to engage experts and key stakeholders. Conclusions Infection control link nurse programs vary in content and in setup. Programs with a clear educational content are viewed as more successful by the infection control practitioners who implement these programs.
Introduction
Healthcare-associated infections are the most frequent adverse event for patients admitted to hospitals, and an important cause of morbidity and mortality [1,2]. Careful infection prevention and control (IPC) measures can prevent up to a third of these infections [3]. IPC measures are laid down in guidelines and policies at the national and international level [2,3]. Implementation of these guidelines is usually the task of infection prevention and control teams. In many Dutch hospitals these teams are supported by infection control link nurses (ICLN) [4]. In all countries where ICLN have been introduced, these nurses act as a link between colleagues in their own clinical area and the infection prevention and control team, and help raise awareness of infection control by educating colleagues and motivating staff to improve practice [4,5].
Review of the literature on ICLN shows that link nurse programs have been implemented all over the world. The majority of this literature originates from the United Kingdom and describes variation in how ICLN programs are organized and implemented [6]. This variation relates to all aspects of such programs, i.e. responsibilities and tasks of ICLN, activities for and education of ICLN, and competences that are required to fulfill the ICLN role [6][7][8]. The few studies that have evaluated the effectiveness of these programs revealed that compliance with hand hygiene guidelines and incidence of MRSA infections indeed improve when ICLN are active [9,10]. However, these studies do not describe their ICLN program in detail nor elaborate on the contextual factors that may have contributed to these improvements. Contextual factors include factors that are not part of the ICLN program, such as cultural, organisational and management characteristics of the hospital, but do play a role in the implementation of IPC practices [11,12]. Examining the variation of existing ICLN programs, assessing the contextual factors that have led to this variation, and evaluating these programs can reveal opportunities to improve their value and to reduce their inefficiencies. We therefore aimed to describe how Dutch ICLN programs are organized and how they progress. Furthermore, we sought to explore the contextual factors that may have influenced the implementation of these programs.
Study design
In a mixed-method study, we combined a cross-sectional survey with additional semi-structured interviews, based on items of the Template for Intervention Description and Replication (TIDieR) checklist [13]. The TIDieR checklist is an extension of the CONSORT 2010 and SPIRIT 2013 statements and was designed to guide the description of trial interventions in sufficient detail to allow their replication. It has proven to be applicable also to the reporting and evaluation of complex interventions in non-trial settings [14,15]. The checklist consists of items concerning: the name of the intervention; the rationale, theory or goal of its elements; procedures; providers; how the intervention was delivered and where; the number of times the intervention was delivered and over what period of time; whether it was tailored, adapted or modified; and whether fidelity was assessed.
To describe the Dutch ICLN programs we developed a survey. Survey questions were based on recent literature on ICLN and categorized according to the TIDieR checklist items [6,16,17]. The survey contained multiple-choice questions, some with multiple answer options. Three infection control practitioners and an epidemiologist pilot tested the survey. After adjustments it was divided into five parts. The first part contained questions on the presence of an ICLN program or the intention to set up such a program. The second part zoomed in on tasks, goals, and activities of the link nurses. In the third part, infection control practitioners were asked which competences they considered important to fulfill the ICLN role. The fourth part covered the educational content and the evaluation of the program. In the final part, respondents were asked to what extent they were able to accomplish their IPC goals through the help of ICLN. This was expressed on a 10-point Likert scale.
Cotterill et al. recommended to describe how contextual factors may have influenced the execution of the intervention to compile a more realistic image of implementation in real life practice, and proposed to extend the TIDieR checklist by four items [18]. These items include the incorporation of the perspectives of those who provided the intervention, the stage of implementation (e.g. from proof of concept to long term sustainability) the intervention has reached, a description of adaptations made to any item in the checklist, and an outline of factors which had impact on how the intervention was implemented.
To explore how contextual factors had influenced implementation and to investigate the real-life practice of ICLN programs, selected infection control practitioners were interviewed in a semi-structured way. The interviews allowed the additional exploration of personal views, experiences and perceptions of why and how specific components of the ICLN program were chosen, how the program was realized in practice, and how it changed over time [19,20]. A topic list (Table 1), based on the checklist extensions described by Cotterill et al., guided the face-to-face interviews.
Data collection
During a National Congress for Dutch infection control practitioners in April 2018, surveys were distributed to and collected from one infection control practitioner per Dutch hospital (n = 74) with inpatient departments. One week after the congress, infection control practitioners who did not return their survey were contacted by telephone. To further explore the survey answers, we conducted semi-structured interviews with infection control practitioners between July 2018 and October 2018. To explore multiple perspectives, a purposeful sampling technique was applied [20,21]. Selection of infection control practitioners was based on the duration of the program in their hospital and how the practitioner graded the effects of the program. The interviews were conducted by one researcher (MD). Interviewees were informed about the study goals and that there were no right or wrong answers. They were assured anonymity and provided written consent. The results of the interviews are reported according to the Consolidated Criteria for Reporting Qualitative Research checklist [22].
Data analysis
Surveys and interviews were analysed separately. Subsequently, survey and interview outcomes were compared to integrate the findings [23]. Surveys were included in the analysis if ≥50% of questions were answered. Survey data were analysed using descriptive statistics. Items that were identified as best practices in ICLN programs in previous studies were compared [6]. These best practices are the availability of a written role profile, education on infection prevention topics as well as on implementation skills, and support of ICLN by the ward manager. Differences in median values for the achievement of program goals between groups were analysed with the Mann-Whitney U test for comparison of two groups and the Kruskal-Wallis test for comparison of three groups. A post-hoc test was performed with a Kruskal-Wallis test with Bonferroni correction for a pairwise comparison of the educational programs. A boxplot was created based on this comparison. Analyses were performed with R Studio version 5.0-0 (R Foundation for Statistical Computing, Vienna, Austria).
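The described tests are straightforward to reproduce; the following Python sketch mirrors the R analysis (the Likert scores below are made-up placeholders, not the study data, and pairwise Mann-Whitney U tests with Bonferroni correction are used as a close stand-in for the post-hoc procedure):

from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

# Perceived goal achievement (10-point Likert) per type of educational program.
groups = {
    "no education":         [5, 2, 6, 5, 7, 3],
    "IPC topics only":      [6, 7, 6, 7, 5, 8],
    "IPC + implementation": [7, 8, 7, 7, 8, 9],
}

print(kruskal(*groups.values()))            # overall comparison of three groups

pairs = list(combinations(groups, 2))       # post hoc: pairwise tests, Bonferroni
for g1, g2 in pairs:
    stat, p = mannwhitneyu(groups[g1], groups[g2])
    print(g1, "vs", g2, "adjusted p =", min(1.0, p * len(pairs)))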
Interviews were audio recorded, transcribed verbatim (MD) and analysed by thematic analysis with an iterative, inductive approach [24,25]. Two team members (MD & RM) read the transcripts several times and independently coded the transcripts to reflect the underlying meaning of the text. Codes were compared and discussed to reach consensus on code names and meaning (MD, RM & IJ). A codebook was created. These codes were clustered into categories and ultimately into themes. During team meetings, the influence of the researchers' backgrounds (Public and Occupational Health, Clinical Microbiology, and Infection Control) was reflected on to further enhance research rigor [26]. Transcripts were analysed with Atlas.ti software version 7.0 for Windows.
Results
In total, 72 of 74 questionnaires were returned (response rate 97.3%) (Supplementary materials 1). Forty-eight (66.7%) came from hospitals with an ICLN program in place. Eighteen (25%) came from hospitals that were planning to implement such a program in the near future. Six (8.3%) reported the ceasing of their link nurse program due to lack of support from ward and hospital management (n = 2), lack of time and power allotted to ICLN (n = 3), or other hospital priorities (a merger) (n = 1). Nine Dutch synonyms were found for these programs. Participants completed all questions in 47 (65.7%) of 72 surveys. Each participant completed 50% or more of the questions; all surveys were included in the analysis. Four infection control practitioners were interviewed. Duration of the programs in these hospitals ranged from three to eight years. The interviewees graded the accomplishment of their goals thanks to the help of ICLN as four (n = 1), six (n = 2), and eight (n = 1) on the 10-point Likert scale. The interviews lasted between 42 and 54 min.
From 523 initial codes, 62 categories and ultimately six themes were identified; four of these were linked to the survey results (Table 2). Quotations are included for illustration.
The start of ICLN programs
In all hospitals where the infection control team initiated an ICLN program, the initiative for the program originated from their need to collaborate with other departments in the hospital, and from the need to disseminate practical IPC knowledge. The actual start of these programs was related to a more positive overall attitude of hospital management and health care workers towards IPC; it was sparked by threats such as a recent Ebola outbreak and the rise of antimicrobial resistance. The occurrence of outbreaks of resistant strains in hospitals, and pressure from external bodies (e.g. Joint Commission International) increased the urge for hospital management to address IPC as an integral part of patient safety and quality of care. It created opportunities for support for infection control practitioners to start an ICLN program.
we needed this outbreak of vancomycin-resistant enterococci to convince our hospital management that we needed to implement an ICLN program [interview 4] In the first phase of setting up a program, the infection control practitioners pitched and discussed their ideas with middle and higher management.
I have been to all wards and talked to the management … we were preparing our hospital for a JCI accreditation [interview 1]
The characteristics of ICLN programs
Infection control practitioners aimed to build a structural relationship with the link nurses in order to exchange information on IPC practices and to improve compliance with IPC protocols. I hope to learn each link nurse to detect potential infection prevention risks …that they will contact me when they have detected a risk or when they have an IPC related question... I want to team up with these nurses [interview 4] The top three goals of ICLN programs were to increase awareness for infection prevention, to create a liaison between the wards and the IPC team, and to make ICLN a source of information for their peers. Some infection control practitioners were able to described these program goals in a clear manner and incorporated knowledge and skills from other departments (e.g. quality department, training and education department) to supplement their own and ICLN' competences whereas others found it challenging to prepare a plan of action.
as an infection control practitioner I am obliged to support link nurses, but I don't know how to do that best [interview 2]
To achieve the program goals, the most sought qualities for ICLN were being motivated, proactive, and enthusiastic. Infection control practitioners' views on the interaction with the ICLN and communication in the context of the ICLN program varied. Some infection control practitioners focused their efforts on providing support for the ICLN in implementing IPC policies, where others focused more on receiving support from the ICLN in monitoring the compliance with IPC measures.
you need to listen to the needs of your link nurses...I want to serve them and support them to disseminate their knowledge to their peers on the wards [interview 3]
The preparation of ICLN programs
Most ICLN were nominated by the ward management; clinical experience as a health care worker was not considered necessary. Not only nurses were included; in most hospitals other disciplines and departments also participated. In one hospital physicians were involved. Infection control practitioners described that they developed their programs while implementing them at the same time. Programs were adapted as IPC teams searched for an optimal strategy to collaborate with their link nurses to improve practice. Adjustments to the program were based on lessons learned during implementation and the dynamic IPC priorities. Infection control practitioners questioned what sort of training to provide, what topics to educate on, and how to stimulate ICLN to be proactive.
Our link nurse meetings must become a bit more interactive. We need to ask: what did you learn? What will you do differently tomorrow? What is the next issue you will address? [interview 3]
The education of ICLN
In almost 90% of the hospitals, programs for ICLN included education, given in sessions with a median duration of two hours, at a frequency of one to six sessions per year. Education of ICLN was generally shaped as in-house training and started with an introduction course. Responsibility to achieve the ICLN program goals lay solely with the IPC team in two thirds of the hospitals. The IPC teams perceived the introduction of ICLN networks and the activities of ICLN as important assets that helped them to achieve their infection control goals. They scored this importance with a median of 7.0 (IQR 6.0-7.0) on a 10-point Likert scale. Table 3 displays best practices in ICLN programs and how participants perceived the role of these best practices in achieving their program goals. In 72% of the hospitals a written role profile was available. The median value for the perceived accomplishment of program goals for these hospitals did not differ from hospitals that did not provide a written role profile. Seventy-one percent of infection control practitioners reported support from ward management for ICLN in their hospital. The median value for perceived accomplishment of program goals also did not differ when compared to programs that did not report this support. ICLN programs that included education on infection prevention topics and training in implementation skills were perceived as more effective (median 7.0, IQR 7.0-8.0) than programs without such education (median 5.0, IQR 2.5-6.8) or programs where education included only infection prevention topics (median 6.0, IQR 6.9-7.5) (Table 4) (Fig. 1).
The progression of ICLN programs
To better support link nurses with department-specific questions or projects, some infection control practitioners scheduled regular meetings at the department in addition to, or instead of, the hospital wide educational meetings. Furthermore, some infection control practitioners involved ward management in ward-specific ICLN activities to interweave the hierarchical structures with the ICLN program activities. This enabled them to influence both the formal and the informal network to facilitate the program goals and created the opportunity to generate more ward-based support for the ICLN. In parallel, it created an opportunity to increase engagement of other infection control practitioners with the program. Occasionally, meeting attendance by ICLN was registered and reported to the management.
at the start of this program ICLN educational meetings were mandatory… at that time, we were in the middle of an outbreak, we didn't have enough time to educate our link nurses... nowadays we do not educate in central meetings, we leave it up to the individual IPC team members to maintain intensive contact with their wards and their link nurses. Each infection control practitioner is responsible for their own contacts and for what is going on in those departments [interview 4] Infection control practitioners described the challenge of developing a program that interconnects ICLN of various departments, to create opportunities for ICLN to exchange experiences and ideas. The variation in work environment and training background is considered to cause this lack of interaction between ICLN of different departments.
we initially wanted to bring link nurses from clinical wards and outpatient clinics together …. during the training it turned out that there was a big difference in knowledge between those two groups…. and that did not correspond so well. They were not able to have meaningful discussions [interview 4] The limited time for IC tasks available for link nurses and for ICLN program tasks of the IPC team was mentioned as a barrier to the implementation of ICLN programs.
last year we could not start the ICLN education for new link nurses …the time was allocated for general education of nurses on the new electronic patient files program [interview 3]
The evaluation of ICLN programs
Half of the ICLN programs have been evaluated. Most evaluations (15/22) were based on the satisfaction of stakeholders with the program. Six hospitals evaluated their ICLN program in relation to the adherence to IPC guidelines. Two hospitals evaluated their program in relation to the prevalence of nosocomial infections.
The majority of hospitals that evaluated their program (17/20) reported positive effects. From the interviews arose the impression that these conclusions were based on random observations during ward rounds and gut feeling. Reported effects seemed related to practical issues (e.g. being able to find IPC protocols).

Table 3 Comparison of best practices for ICLN programs with perceived accomplishment of program goals
Discussion
This mixed methods study provides a detailed overview of infection control link nurse programs in the Netherlands and gives a broader understanding of the factors that can influence the content of these programs and their implementation in acute care hospitals. It confirms the well-known variation in these programs. In addition, our approach permitted us to quantify this variation and to find opportunities to reduce inefficiencies and to improve the value of these programs. This, to the best of our knowledge, was not done before. Two thirds of Dutch hospitals have an ICLN program in place. Although programs vary widely, education is a core component of nearly all of these programs. ICLN programs are often set up and led solely by the IPC team. Our survey showed that infection control practitioners were more satisfied with their ICLN program if they were able to incorporate training in implementation skills in their educational program. From the interviews it transpired that infection control practitioners seemed more satisfied if they were able 1) to express a more coherent vision and more long-term strategic goals, 2) to involve more experts (e.g. educational experts) in the enhancement of their program, and 3) to engage more key stakeholders, including management and their direct colleagues in the IPC team, to create support. These aspects therefore appear useful to keep in mind when planning improvements of existing ICLN programs or when setting up new programs. Overall, our results emphasize that to improve ICLN programs, infection control practitioners need sufficient skills to select and apply appropriate implementation strategies, and to evaluate these strategies to continuously adapt to the dynamic hospital context. In line with this, Gilmartin and colleagues suggest that infection control practices can indeed improve if implementation strategies are systematically considered and applied [27]. The 2017 Geneva Think Tank, a panel of international experts, concluded that implementation science must be a priority in infection prevention [28]. In agreement with our findings, it stresses the importance for infection prevention experts as well as other health care workers (e.g. ICLN) of improving their implementation skills.
Education of the link nurses is seen as the core element of ICLN programs, although the effect was not systematically measured. Grol et al. nicely summarized the evidence showing that the dissemination of research findings or guidelines through education can be helpful in realizing simple changes in daily practice [29]. However, to improve IPC guideline adherence, behavioral change is a prerequisite, and such change requires more complex, multifaceted strategies [29][30][31]. Considering our findings in the light of recommendations made by the World Health Organization, we suggest that ICLN programs should be designed as multimodal interventions [32]. The multimodal approach includes: (1) a comprehensive plan of education, training and communication, (2) the engagement of hospital and ward management, and (3) audit and feedback [28,32]. It is also important to understand the potential barriers to the implementation of an ICLN program in order to fit the program to the local context, and to be able to intervene to remove these barriers [29]. We agree with Cunningham et al. that engaging other stakeholders and collaborating with direct colleagues can help prevent vulnerability of the program with respect to sustaining network activities [33]. Audit and feedback are essential to boost implementation of IPC policies and can yield valuable input for the evaluation of effects of, and refinements to, the ICLN program [32,34]. Finally, and possibly most importantly, ICLN programs should be considered an integral component of infection prevention and control programs and not a self-contained project [32].
A major strength of this study is the high survey response rate, which contributed to the representativeness of our findings. We performed additional interviews to deepen our insight into the findings from the survey. This triangulation reduced the chance of single-source bias [35]. Furthermore, the interviews reflect real-life strategies used by infection control practitioners to disseminate their knowledge through link nurse programs. A deeper understanding of the structure and characteristics of these programs is vital to further develop well-functioning programs [33]. This study has limitations. As the IPC community in the Netherlands is small, respondents might have chosen to respond in a more positive way rather than choosing the responses that reflected their true thoughts. This social desirability bias could distort the results of the survey and the interviews [36]. To decrease the chance of this bias, we assured participants of their anonymity in both the survey and the interviews; we also explicitly made clear that there were no right or wrong responses [36].
The interviews were performed to add real-world examples from link nurse programs to the survey results; the number of interviews was small and may therefore have provided only a limited number of points of view. We provided interview quotes to enhance the transferability of our findings [37].
A follow-up study using social network analysis could operationalize the social structure and cohesion of ICLN networks, their relevance to the implementation of IPC guidelines and clarify how to improve network-based processes to transfer IPC knowledge and support program goals [38][39][40].
Conclusion
Infection control link nurse programs in Dutch hospitals originate from a need to collaborate with, and to disseminate practical IPC knowledge to other departments in the hospital. The start of these programs is related to a more positive overall attitude of hospital management and healthcare workers towards infection prevention and control. Although programs vary widely, education is an overall core component. Efforts to improve the uptake of IPC guidelines through ICLN programs should focus on enhancing infection control practitioners' and link nurses' knowledge on implementation science and designing these link nurse programs as multimodal interventions. To evaluate the contribution of ICLN programs to the implementation of IPC guidelines it is necessary to audit the program effects and to perform well-designed effectiveness studies. Social network analysis could contribute to understanding how knowledge on infection control and prevention is transferred best.
Additional file 1. Response rate
Abbreviations IPC: Infection prevention and control; ICLN: Infection control link nurses; TIDieR: The Template for Intervention Description and Replication checklist | 2020-02-27T21:42:32.438Z | 2020-02-27T00:00:00.000 | {
"year": 2020,
"sha1": "0e4410635ecefc8e723c1c07d020b4b8bd78793f",
"oa_license": "CCBY",
"oa_url": "https://aricjournal.biomedcentral.com/track/pdf/10.1186/s13756-020-0704-2",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0b91d7eee3cd2d9d628494413197e491d91fc797",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
52010076 | pes2o/s2orc | v3-fos-license | Discrete gradient descent differs qualitatively from gradient flow
We consider gradient descent on functions of the form $L_1 = |f|$ and $L_2 = f^2$, where $f: \mathbb{R}^n \rightarrow \mathbb{R}$ is any smooth function with 0 as a regular value. We show that gradient descent implemented with a discrete step size $\tau$ behaves qualitatively differently from continuous gradient descent. We show that over long time scales, continuous and discrete gradient descent on $L_1$ find different minima of $L_1$, and we can characterize the difference - the minima that tend to be found by discrete gradient descent lie in a secondary critical submanifold $M' \subset M$, the locus within $M$ where the function $K=|\nabla f|^2 \big|_M$ is minimized. In this paper, we explain this behavior. We also study the more subtle behavior of discrete gradient descent on $L_2$.
Introduction
In this paper, we consider gradient descent on functions of the form $L_1 = |f|$ and $L_2 = f^2$, where $f: \mathbb{R}^n \rightarrow \mathbb{R}$ is any smooth function with 0 as a regular value. When our discussion applies to functions both of the form $L_1$ and of the form $L_2$, we call the function $L$. We show that gradient descent implemented with a discrete step size $\tau$ behaves qualitatively differently from continuous gradient descent, also called gradient flow.
Under our assumptions on $f$, the locus of global minima of $L$ is the codimension 1 submanifold $M = L^{-1}(0)$. In both discrete and continuous implementations, let us start at a random initialization and use gradient descent to move toward $M$. Under continuous gradient descent, this occurs in one phase. Under discrete gradient descent on $L_1$, if one gets close enough to the critical manifold $M$, there is a second, qualitatively different phase of the process. This second phase can be described as gradient flow along $M$ (even though the gradient of $L$ along $M$ is undefined!), minimizing the function $K = |\nabla f|^2$.
Thus we find that when gradient descent on $L_1$ is implemented discretely, not only is $L_1$ minimized, but if the process succeeds in reaching a global minimum, a second process implicitly emerges. As a result, discrete gradient descent on $L_1$, if run on long time scales, preferentially finds global minima with low values of $K$ over those with high values of $K$. In this paper, we will explain this behavior and derive the formula for $K$. Discrete gradient descent on $L_2$ is more subtle, but we will study that case as well.
1.1. Acknowledgements. The author would like to thank several people who have been instrumental to this project. Avi Wigderson for asking a question about the distribution of linear neural nets found by gradient descent in the overparameterized setting that gave rise to this entire project, as well as warm encouragement and many enjoyable conversations. Cliff Taubes for providing key ideas for the proof of Theorem 1. Matteo Salvatari and Stephen Wolfram for showing the author how to program neural networks in Mathematica. Nathaniel Bottman for many fruitful conversations and helping to code many simulations testing the ideas in this paper.
The author would also like to thank Eli Grigsby, Dan Gulotta, Janos Kollar, Pravesh Kothari, Yann LeCun, Shay Moran, Behnam Neyshabur, Matthias Schwarz, and Jacob Tsimmerman for many enjoyable conversations and valuable insights.
2. Discrete gradient descent for $L_1 = |f|$: theory

Let us begin by defining discrete gradient descent with step size $\tau$ to minimize any function $L: \mathbb{R}^n \rightarrow \mathbb{R}$ by the following iterative process.
We begin at an initial position $p_0 = (x_1, \ldots, x_n)$. Suppose that after $t$ steps we have reached the point $p_t$. Then the $(t+1)$-st step is
$$p_{t+1} = p_t - \tau\,(\nabla L)(p_t), \qquad (2.1)$$
where $\nabla L$ is the gradient in the standard Cartesian coordinates of $\mathbb{R}^n$.
In this section we focus on a special case of discrete gradient descent. Let $f: \mathbb{R}^n \rightarrow \mathbb{R}$ be a smooth function for which 0 is a regular value. Let $M = f^{-1}(0)$. Since we assume 0 is a regular value of $f$, $M \subset \mathbb{R}^n$ is a smooth codimension 1 submanifold. We are interested in gradient descent on the function $L_1 = |f|$. Note that $L_1$ is not differentiable along $M$, so the gradient descent step is not well defined if $p_{t-1} \in M$. Generically, though, none of the points $p_i$ will lie in $M$.
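To fix ideas, here is a minimal Python sketch of this iteration for $L_1 = |f|$. The paper's experiments were run in Mathematica/Matlab; the function names and default parameters below are illustrative assumptions, not the paper's code.

```python
import numpy as np

def discrete_gd_abs(f, grad_f, p0, tau=1e-3, steps=100_000):
    """Iterate p_{t+1} = p_t - tau * grad|f|(p_t), using grad|f| = sign(f) * grad f off M."""
    p = np.asarray(p0, dtype=float)
    trajectory = [p.copy()]
    for _ in range(steps):
        # The step is undefined exactly on M (sign(f) = 0 there); generic p never lands on M.
        p = p - tau * np.sign(f(p)) * grad_f(p)
        trajectory.append(p.copy())
    return np.array(trajectory)
```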
Theorem 1. Let $f: \mathbb{R}^n \rightarrow \mathbb{R}$ be a smooth function for which 0 is a regular value. If one implements discrete gradient descent on $L_1 = |f|$ with step size $\tau > 0$, and if for some $t$ the point $p_t$ is within approximately $\tau|\nabla f|$ of $M$, then there will be a second, distinct phase. This second phase can be described effectively as gradient descent along $M$ minimizing the function $K = |\nabla f|^2\big|_M$.
Proof. We are interested in gradient descent on $L$ in the coordinate system $(x_1, \ldots, x_n)$. However, it will be easier to analyze this process in a coordinate system more natural to the manifold $M$. In general, the gradient of a function computed in a new coordinate system will not agree with the gradient of the function computed in the original coordinate system. However, if the new coordinates are orthogonal with respect to the old coordinates, meaning that the Jacobian of the change-of-coordinates function is an orthogonal matrix at every point, then the gradient can be correctly computed in the new coordinate system.
We wish to use tubular coordinates around $M$. $M$ has a normal vector field; let $\mathbf{n}$ denote the unit normal vector in the direction in which $f$ is increasing. For any $p \in M$, there is a neighborhood $U \subset M$ of $p$ such that there is a diffeomorphism $\varphi$ from $U \times [-\epsilon, \epsilon]$ to a tubular neighborhood of $U$. We assume that an orthogonal coordinate system $(q_1, \ldots, q_{n-1})$ on $U$ exists, which we sometimes denote more compactly as $q$. For $s \in [-\epsilon, \epsilon]$, $\varphi$ sends $(q, s)$ to $q + s\mathbf{n}$.
In this coordinate system, we can use Taylor's theorem to expand $f$ in the variable $s$:
$$f(q, s) = f(q, 0) + s\,\frac{\partial f}{\partial s}(q, 0) + O(s^2) = s\,\frac{\partial f}{\partial s}(q, 0) + O(s^2) = s\,|(\nabla f)(q, 0)| + O(s^2),$$
where the second equality holds because $f(q, 0) = 0$ and $\frac{\partial f}{\partial q_k}(q, 0) = 0$ on $M$, and where the last equality holds because we chose the orientation of $\mathbf{n}$ so that $\frac{\partial f}{\partial s}(q, 0)$ is positive. We conclude that
$$L(q, s) = |f(q, s)| \approx |s|\,|(\nabla f)(q, 0)|.$$
Applying $\nabla$ to both sides, we obtain the tangential components
$$\nabla_q L = |s|\,\nabla_q |(\nabla f)(q, 0)| + \text{higher order terms}.$$
We compute the last coordinate as
$$\frac{\partial L}{\partial s} = \mathrm{sgn}(s)\,|(\nabla f)(q, 0)| + \text{higher order terms}.$$
Now we can write approximate formulas for discrete gradient descent in these coordinates, dropping the higher order terms. Under discrete gradient descent on $|f(q, s)|$ with time step $\tau$,
$$s_{t+1} = s_t - \tau\,\mathrm{sgn}(s_t)\,|(\nabla f)(q, 0)|,$$
$$q_{t+1} = q_t - \tau\,|s_t|\,\nabla_q |(\nabla f)(q, 0)|. \qquad (2.7)$$
First we consider the evolution in the $s$-coordinate.
If $s_t$ is positive and $s_t > \tau\,|(\nabla f)(q, 0)|$, in this step we continue decreasing the value of $s$ toward the manifold $M$. If $s_t$ is in the range $0 < s_t < \tau\,|(\nabla f)(q, 0)|$, then in this step we cross to the other side of $M$. Similarly if $s_t$ is negative. So in the $s$-coordinate, under discrete gradient descent, once we get close enough to $M$ we oscillate back and forth across $M$.
On the other hand, both when $s_t$ is positive and when $s_t$ is negative, the $q$-step (2.7) is the same. So in the $q$ coordinate, regardless of which side of $M$ we are on, we experience discrete gradient descent in the same direction along $M$, because (2.7) depends only on $|s_t|$ and not on $s_t$. The precise analysis is simplest if we consider a pair of steps. Suppose we start at $s_t = a\tau\,|(\nabla f)(q, 0)|$ for some $0 < a < 1$. Then in the next step, $|s_{t+1}| = (1-a)\tau\,|(\nabla f)(q, 0)|$ (on the other side of $M$), and the following step returns to where we started, $s_{t+2} = s_t$. The corresponding steps in the $q$ coordinates are
$$q_{t+1} = q_t - a\tau^2\,|(\nabla f)(q, 0)|\,\nabla_q |(\nabla f)(q, 0)|,$$
$$q_{t+2} = q_{t+1} - (1-a)\tau^2\,|(\nabla f)(q, 0)|\,\nabla_q |(\nabla f)(q, 0)|,$$
so over the pair of steps, $q_{t+2} = q_t - \frac{\tau^2}{2}\,\nabla_q |(\nabla f)(q, 0)|^2$. We conclude that secondary gradient descent can be described as discrete gradient descent within the submanifold $M$ with step size $\tau^2$ minimizing the function $K = |(\nabla f)(q, 0)|^2\big|_M$.
3. Continuous and discrete gradient descent on $L_1 = |xy - 4|$: an example

To illustrate the phenomenon described in the previous section, and to see the contrast with continuous gradient descent, in this section we make a detailed study of the behavior of both continuous and discrete gradient descent when used to minimize a simple function.
Let $f = xy - 4$. In this section we consider gradient descent on $L_1 = |xy - 4|$. We visualize continuous gradient descent by plotting flow lines of the gradient field with a differential equation plotter. Next we implement discrete gradient descent on a computer, show the path taken, and compare to the continuous setting. Then we mathematically analyze continuous gradient descent, apply Theorem 1 to this case of discrete gradient descent, and check that our analyses match the computer simulations.
3.1. Computer simulations.

3.1.2. Discrete descent. In the second iteration of this problem, we will begin at a randomly chosen initial point $p_0 = (x_0, y_0) = (1.05, 0.8) \in \mathbb{R}^2$ and implement discrete gradient descent in Mathematica. The locus of global minima of $L_1 = |xy - 4|$ is the hyperbola $xy = 4$, which is 1-dimensional as expected. During the execution of discrete gradient descent on this function, we observe two phases. The first phase takes us from $(1.05, 0.8)$ to approximately $(2.06, 1.94)$, the point on the hyperbola that we would converge to under continuous gradient descent. In the second phase, the points $p_i$ oscillate around the hyperbola, approximately converging (but never actually converging) to the point $(2, 2)$. After a long time, the steady state is to oscillate around the point $(2, 2)$, along the line $x = y$.
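A script along the following lines (a sketch, not the paper's code; the step size and iteration count are arbitrary choices) reproduces both phases: a fast approach to the hyperbola near $(2.06, 1.94)$, followed by slow drift along it toward $(2, 2)$.

```python
import numpy as np

f      = lambda p: p[0] * p[1] - 4.0
grad_f = lambda p: np.array([p[1], p[0]])

p, tau = np.array([1.05, 0.8]), 1e-3
for _ in range(2_000_000):                       # long run; the drift is O(tau^2) per step
    p = p - tau * np.sign(f(p)) * grad_f(p)
# After enough steps, p oscillates within ~tau*|grad f| of the hyperbola near (2, 2),
# the minimum of K = |grad f|^2 = x^2 + y^2 restricted to xy = 4.
print(p)
```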
3.2. Theory.
3.2.1. Continuous descent. Now we consider continuous gradient descent on the function $L_1 = |xy - 4|$. Beginning at an arbitrary initial point $p_0 = (x_0, y_0) \in \mathbb{R}^2$, we wish to find the point $p = (x_{cont}, y_{cont})$ that continuous gradient descent converges to. We do so by integrating the gradient field $-\nabla L_1$; along its flow lines the quantity $x^2 - y^2$ is conserved. We find that if we start at $(x_0, y_0) = (C_1 + C_2, C_1 - C_2)$ and let $u = 4 + C_1^2 C_2^2$, then under continuous gradient descent we converge to the point
$$(x_{cont}, y_{cont}) = \left(\sqrt{2\,(C_1 C_2 + \sqrt{u})},\ \sqrt{2\,(\sqrt{u} - C_1 C_2)}\right)$$
on the hyperbola $xy = 4$.
3.2.2. Discrete descent. We apply the results of the previous section to the case of discrete gradient descent on $L_1(x, y) = |xy - 4|$. Again, we begin at an arbitrary initial point $p_0 = (x_0, y_0)$.
By Theorem 1, there will be two phases. The first phase brings us near $(x_{cont}, y_{cont})$. In the second phase, secondary gradient descent emerges, which can be approximated as discrete gradient descent along the hyperbola $M$ where $xy = 4$, minimizing the function $K = |\nabla f|^2$ along $M$.
In this case, $M$ is one-dimensional and the locus where $|\nabla f(x, y)|^2 = x^2 + y^2$ is minimized is the zero-dimensional locus $\{(2, 2), (-2, -2)\}$. So under the secondary gradient descent, we end at one of these two points, which minimize $|\nabla f|^2\big|_M$ along $M$, just as we observed in the computer calculation. Whether we end at $(2, 2)$ or $(-2, -2)$ depends on which branch of the hyperbola $(x_{cont}, y_{cont})$ was on.
4. Discrete gradient descent on $L_2 = f^2$: computer experiments

Though the two functions $|f|$ and $f^2$ are similar, discrete gradient descent with step size $\tau$ for $L_1 = |f|$ and $L_2 = f^2$ behaves quite differently near $M$. This is because near $M$, the norm of the gradient of $|f|$ is approximately constant on each side of $M$. In contrast, for $f^2$, the norm of the gradient goes to 0 as one approaches $M$. It is true, though, that at every point $p \in \mathbb{R}^n \setminus M$, $\nabla|f|(p)$ points in the same direction as $\nabla f^2(p)$.
If implemented directly, discrete gradient descent on $L_2$ does not exhibit secondary gradient descent. As it approaches $M$, the norm $\tau|\nabla f^2|$ goes to zero, and in this case discrete gradient descent converges to approximately the same point that continuous gradient descent does. However, there are many modified implementations of discrete gradient descent used in practice, and secondary gradient descent does emerge in some of them.
In the remainder of this section, we continue our detailed study of gradient descent on $|xy - 4|$ and $(xy - 4)^2$ by implementing in computer simulations several modified versions of discrete gradient descent on $L_2 = (xy - 4)^2$. In some we observe secondary gradient descent; in some we don't. For each, we discuss why secondary descent does or does not occur.
Generally, if there is some reason that gradient descent doesn't converge to a point in $M$ but instead bounces back and forth across it, secondary descent may occur. For $L_1 = |f|$ this bouncing arises because the size of the gradient vectors does not vanish near $M$. For $L_2 = f^2$, even though the gradient field does vanish near $M$, there are still multiple mechanisms that can cause the process to bounce back and forth across $M$. We will explore several such mechanisms.

4.1. Discrete gradient descent with fixed effective step size. The first modified implementation we consider is discrete gradient descent with normalized effective step size. We begin by establishing some terminology. In the simplest implementation of discrete gradient descent (2.1), we refer to $\tau$ as the step size, and to $\tau|\nabla L|$ as the effective step size, as it is the distance traveled in $\mathbb{R}^n$ at each step. In this first modification, we bound the effective step size from below. Fix a cutoff $c$. As before, we begin at some initial position $p_0$. Suppose that after $t$ steps we have reached the point $p_t$. Then the $(t+1)$-st step is
$$p_{t+1} = p_t - \tau\,(\nabla L)(p_t) \ \ \text{if } \tau\,|(\nabla L)(p_t)| \geq c, \qquad p_{t+1} = p_t - c\,\frac{(\nabla L)(p_t)}{|(\nabla L)(p_t)|} \ \ \text{otherwise}.$$
With this modification, the effective step size during gradient descent does not go to zero as the sequence approaches $M$, and the sequence can never converge exactly to a point in $M$. Instead, the sequence will bounce back and forth near $M$, and secondary gradient descent has the potential to emerge. In fact, this renormalized gradient field is approximately the same as $\nabla L_1$ near $M$, so the analysis of Section 2 is a good approximate analysis of the dynamics. So we expect to observe two phases of gradient descent and, during the second phase, to approximately minimize $|\nabla f|^2$ along $M$. Indeed, when implemented for $L_2 = (xy - 4)^2$ in Matlab, we observe exactly that.
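As a sketch, one plausible Python rendering of this clamped update (the clamping rule below is our reading of "bounding the effective step size from below"; the constants are placeholders):

```python
import numpy as np

def step_fixed_effective(p, grad_L, tau=1e-3, c=1e-3):
    g = grad_L(p)
    norm = np.linalg.norm(g)
    if norm == 0.0:
        return p                      # at a critical point; nothing to do
    if tau * norm >= c:
        return p - tau * g            # ordinary step (2.1)
    return p - (c / norm) * g         # clamp the effective step size to c
```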
4.2. Discrete gradient descent with $\epsilon$-jitter. A second modification of discrete gradient descent that is sometimes implemented involves adding, at each step, a small random vector to the gradient vector. We call this modification discrete gradient descent with $\epsilon$-jitter.

In this case, we begin at some initial position $p_0$. At the $(t+1)$-st step,
$$p_{t+1} = p_t - \tau\,\big((\nabla L)(p_t) + \epsilon_t\big), \qquad \epsilon_t = (\epsilon_{1,t}, \ldots, \epsilon_{n,t}),$$
where $\epsilon_{1,t}, \ldots, \epsilon_{n,t}$ are drawn from a Gaussian distribution with mean 0 and standard deviation $\epsilon$.
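A minimal sketch of one $\epsilon$-jitter step in Python (names and defaults are illustrative):

```python
import numpy as np

def step_jitter(p, grad_L, tau=1e-3, eps=1e-2, rng=np.random.default_rng(0)):
    # Isotropic Gaussian jitter is added to the gradient before the usual step.
    noise = eps * rng.standard_normal(p.shape)
    return p - tau * (grad_L(p) + noise)
```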
In this case, once the sequence gets near $M$, the jitter causes it to bounce back and forth across $M$ indefinitely, which is the first ingredient for secondary gradient descent. However, we do not expect secondary gradient descent along $M$ in this case. This is because each step of the process that induces secondary gradient descent is primarily perpendicular to $M$, with a very small tangential component. Secondary gradient descent arises from the cumulative effect of these small tangential components over many steps. But in this setting, the jitter is on average equal in the tangential and perpendicular directions, so it masks the tangential flow. We expect that the jitter induces a random walk along $M$, rather than a directed flow toward $M'$. Indeed, when implemented for $L = (xy - 4)^2$ in Matlab, we observe exactly that.

4.3. $\epsilon$-noisy gradient descent. In this section, we consider a different noisy modification of discrete gradient descent, which we will call $\epsilon$-noisy gradient descent.
We begin at some initial position $p_0$. At the $(t+1)$-st step, we take
$$p_{t+1} = p_t - \tau\,(\nabla L_t)(p_t),$$
where $\epsilon_t$ is drawn from a Gaussian distribution with mean zero and standard deviation $\epsilon$, and $L_t = (f + \epsilon_t)^2$. In the previous section, the random component perturbed the process parallel to $M$ as much as it perturbed it perpendicular to $M$, so the jitter completely overwhelmed the secondary gradient descent and we observed a random walk around $M$. However, if the added randomness were to perturb the sequence primarily in a perpendicular direction, we may again see the phenomenon of secondary gradient descent. That is what happens here.
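A minimal sketch of one $\epsilon$-noisy step, under the reading $L_t = (f + \epsilon_t)^2$ used above (names and defaults are illustrative):

```python
import numpy as np

def step_noisy(p, f, grad_f, tau=1e-3, eps=1e-2, rng=np.random.default_rng(0)):
    e = eps * rng.standard_normal()                  # perturb the *level* of f, not the step
    # grad (f + e)^2 = 2 (f + e) grad f, with e held fixed within the step
    return p - tau * 2.0 * (f(p) + e) * grad_f(p)
```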
Consider our example $f = xy - 4$. When $\epsilon$ is small, the critical manifold of $f(x, y) = xy - (4 - \epsilon)$ is essentially parallel to the critical manifold of $f(x, y) = xy - 4$. So if $(u, v)$ is near $M$, the gradient field of $L_t$ will push the point away from $M$ primarily perpendicularly. The tangential component of $\nabla L_t$ points in basically the same direction as the tangential component of $\nabla L$, so at each step we expect to move somewhat parallel to $M$ toward the set $M'$ minimizing $|\nabla f|^2$. Hence we expect the qualitative behavior of discrete gradient descent in this setting to be similar to that of simple discrete gradient descent in the $L = |f|$ case, with a primary gradient flow bringing the sequence to $M$, followed by a secondary gradient flow that further pushes it to $M' \subset M$.
We will prove this in the next section, but for now we implement this process in Matlab and observe the behavior we have just described.

5. $\epsilon$-noisy gradient descent for $L_2 = f^2$: theory

As discussed in the previous section, discrete gradient descent on the function $L_2 = f^2$ generally does not display secondary gradient descent. However, modified implementations of discrete gradient descent on $L_2$ do. In this section, we characterize the behavior of secondary gradient descent on $M$ for $\epsilon$-noisy gradient descent, as defined in the previous section.
Theorem 2. Let $f: \mathbb{R}^n \rightarrow \mathbb{R}$ be a smooth function for which 0 is a regular value. Suppose one implements $\epsilon$-noisy gradient descent on $L_2 = f^2$ with step size $\tau > 0$ and initial point $p_0$, resulting in a sequence of points $p_0, p_1, \ldots$. If for some $t$ the point $p_t$ is within approximately $\tau|\nabla f|$ of $M$, then a second phase of gradient descent will take place. This secondary phase can be effectively described as discrete gradient descent with step size $\tau^2\epsilon^2$ along $M$ minimizing the function $K = |\nabla f|^2\big|_M$.
Proof. We are implementing a modified form of discrete gradient descent on $L_2 = f^2$, with update rule $p_{t+1} = p_t - \tau\,(\nabla L_t)(p_t)$, where $\epsilon_t$ is drawn from a Gaussian distribution with mean 0 and standard deviation $\epsilon$ and $L_t = (f + \epsilon_t)^2$. We begin by expressing $L_t$ in tubular coordinates, using the Taylor expansion for $f$:
$$L_t(q, s) = \big(f(q, s) + \epsilon_t\big)^2 \approx \big(s\,|\nabla f|(q, 0) + \epsilon_t\big)^2.$$
Next, we'd like to express the update rule in the tubular coordinates constructed in Section 2. As discussed there, the gradient can be correctly computed in those coordinates.
Thus when $s$ is small, the update rule in the $q$-coordinates can be approximated as
$$q_{t+1} = q_t - \tau\,\nabla_q\big(s_t^2\,|\nabla f|^2(q, 0) + 2\,s_t\,\epsilon_t\,|\nabla f|(q, 0)\big).$$
During the process of $\epsilon$-noisy gradient descent, we expect $\epsilon_t$ to be positive approximately as often as it is negative, because $\epsilon_t$ is drawn from a Gaussian distribution with mean 0. So the expected value of the term $2\,s_t\,\epsilon_t\,|\nabla f|(q, 0)$ is approximately 0, and we can approximate the update rule in the $q$ coordinates as
$$q_{t+1} = q_t - \tau\,\nabla_q\big(s_t^2\,|\nabla f|^2(q, 0)\big).$$
We conclude there is a secondary phase of gradient descent during which the term $s_t^2\,|\nabla f|^2$ is minimized. To compute the expected value of this update vector, we would like to know the expected value of $s_t^2$. To find this, we need to analyze the dynamics in the $s$-coordinate. In that coordinate, the update rule is
$$s_{t+1} = s_t - 2\tau\,|\nabla f|(q, 0)\,\big(|\nabla f|(q, 0)\,s_t + \epsilon_t\big) = \big(1 - 2\tau\,|\nabla f|^2(q, 0)\big)\,s_t - 2\tau\,|\nabla f|(q, 0)\,\epsilon_t.$$
This is an example of an AR(1) process, and the expected value of $s_t^2$ is computed as
$$\mathbb{E}[s_t^2] \approx \frac{\big(2\tau\,|\nabla f|(q, 0)\,\epsilon\big)^2}{2 \cdot 2\tau\,|\nabla f|^2(q, 0)},$$
where the numerator is the variance of the noise term, and the denominator is twice the coefficient by which the $s_t$ term decays [H94].
This simplifies to $\mathbb{E}[s_t^2] \approx \tau\epsilon^2$. So we conclude that in the $q$-coordinate, the expected value for the update rule is
$$q_{t+1} = q_t - \tau\,\nabla_q\big(\tau\epsilon^2\,|\nabla f|^2\big) = q_t - \tau^2\epsilon^2\,\nabla_q |\nabla f|^2.$$
This is simply the update rule for discrete gradient descent on $M$, minimizing the function $K = |\nabla f|^2\big|_M$. This concludes the proof.
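The stationary second moment can be checked numerically; the following Python sketch (parameter values are arbitrary) simulates the AR(1) recursion and compares the empirical mean of $s_t^2$ to $\tau\epsilon^2$:

```python
import numpy as np

# Monte Carlo check of E[s^2] ~ tau * eps^2 for the AR(1) recursion
# s_{t+1} = (1 - 2 tau g^2) s_t - 2 tau g eps_t, eps_t ~ N(0, eps^2), g = |grad f|(q, 0).
rng = np.random.default_rng(0)
tau, g, eps = 1e-3, 1.7, 0.05
s, burn_in, n = 0.0, 50_000, 500_000
acc = 0.0
for t in range(burn_in + n):
    s = (1.0 - 2.0 * tau * g * g) * s - 2.0 * tau * g * eps * rng.standard_normal()
    if t >= burn_in:
        acc += s * s
print(acc / n, tau * eps**2)   # the two agree up to an O(tau g^2) correction
```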
Discussion
It is a surprising but real phenomenon that discrete gradient descent behaves qualitatively differently from continuous gradient descent. In this paper, we have seen that when considering gradient descent on functions of the form $L_1 = |f|$ or $L_2 = f^2$, not only are the two qualitatively different, but this difference is robust to multiple modifications of the discrete gradient descent algorithm.
It is striking that, beginning at the same initial point, on long time scales the minima found by discrete and continuous gradient descent are different. Under continuous gradient descent, if the sequence does not get trapped in a local minimum, it will converge to a global minimum $m \in M$. Under discrete gradient descent starting at the same point, if the sequence does not get trapped in a local minimum, it will arrive near $m$, but then continue moving along $M$ toward global minima that minimize not only $L$ but also the function $K = |\nabla f|^2$.
Secondary gradient descent is subtle, but it is not a small effect causing only minor changes in the trajectory of gradient descent. Rather, on long time scales, it leads discrete gradient descent toward significantly different minima than continuous gradient descent finds when both begin at the same initial point. | 2018-08-14T18:06:58.000Z | 2018-08-14T00:00:00.000 | {
"year": 2018,
"sha1": "bcbb801435a04770ca837fb03c14833ee3df0fdd",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "bcbb801435a04770ca837fb03c14833ee3df0fdd",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
} |
260714932 | pes2o/s2orc | v3-fos-license | Nicotinamide N-Methyl Transferase as a Predictive Marker of Tubular Fibrosis in CKD
Purpose Chronic kidney disease (CKD) progression is complex, and there are no standardized methods for predicting the prognosis of CKD. Nicotinamide N-methyltransferase (NNMT) has been shown to be associated with renal fibrosis. This study aimed to validate NNMT as a prognostic biomarker of progressive CKD. Patients and Methods We explored the relationship between NNMT expression and CKD-related outcome variables using the NephroseqV5 and GEO databases. Additionally, a validation set of 37 CKD patients was enrolled to measure the correlation between NNMT expression levels and CKD outcomes. Furthermore, single-cell RNA sequencing data and the Human Protein Atlas were reanalyzed to investigate the expression specificity of NNMT in the kidney. Finally, to examine NNMT expression during tubular fibrosis in vivo, we constructed a unilateral ureteral obstruction (UUO) mouse model treated with an NNMT inhibitor. Results Analysis of the datasets showed that NNMT was expressed mainly in proximal tubule compartments, and patients with high NNMT expression levels had a significantly lower overall survival rate than those with low NNMT expression levels (P = 0.013). NNMT was an independent prognostic factor in the multivariate Cox regression model, and the AUCs for CKD progression at 1, 3, and 5 years were 0.849, 0.775, and 0.877, respectively. Pathway enrichment analysis indicated that NNMT regulates biological processes of tubulointerstitial fibrosis (TIF). In the validation group, NNMT levels were significantly higher in the CKD group with interstitial fibrosis. In vivo, NNMT was highly expressed in the UUO group, peaking at postoperative day 21. Treatment with an NNMT inhibitor improved renal tubulointerstitial fibrosis, and expression levels of FN, α-SMA, VIM, and TGF-β1 were decreased compared with UUO alone (P < 0.05). Conclusion NNMT was expressed mainly in tubular renal compartments and associated with CKD prognosis. It holds potential as a diagnostic biomarker for tubular fibrosis in CKD.
Introduction
Chronic kidney disease (CKD) inevitably progresses to end-stage renal disease (ESRD), regardless of the initial cause or level of renal impairment. Early detection of progressive CKD is crucial for implementing early interventions to mitigate severity and prevent complications. Kidney fibrosis is the primary characteristic of CKD progression, and currently, there is no effective treatment for fibrosis. 1,2 Estimated glomerular filtration rate (eGFR) and proteinuria are widely utilized for clinical diagnosis and treatment monitoring; however, there is ongoing controversy surrounding their use. 1,3 The degree of eGFR variation in these measures represents the renal functional reserve, which only increases once patients have developed renal insufficiency. 4 Proteinuria is a significant risk factor for CKD progression. 5 While proteinuria may regress, remain stable, or progress in renal disorders, early eGFR loss leads to CKD and ESRD. 6 These biomarkers are inadequate for predicting and monitoring disease progression in CKD. 7,8 Consequently, novel biomarkers are needed to enhance the prediction of CKD progression, enabling the identification of high-risk patients who may benefit from precision management and intensified treatment. 9,10 Nicotinamide N-methyltransferase (NNMT), a Phase II metabolizing enzyme, catalyzes the methylation of nicotinamide and other pyridines into pyridinium ions. 11,12 It plays an important role in regulating lipid and glucose metabolism, inflammation, and energy homeostasis. [13][14][15] NNMT, mainly expressed in the liver and some other organs (including the kidney), is a potential biomarker for predicting oncological outcomes in urological cancers. 16,17 On one hand, some studies have indicated that knockdown of NNMT leads to the death of renal tubular epithelial cells, while overexpression of NNMT inhibits apoptosis of renal tubular epithelial cells. Additionally, NNMT was upregulated in unilateral ureteral obstruction (UUO) mice and in TGF-β1-induced renal tubular epithelial cells, suggesting it may serve as a protective compensatory response to tubulointerstitial fibrosis (TIF). 18,19 Increased expression of NNMT may be an adaptive immune response and play an active role in muscle fiber repair. 20 On the other hand, some researchers have found that NNMT deficiency improves renal fibrosis. 21 The role of NNMT in kidney disease progression is controversial, and the biological mechanisms by which NNMT contributes to CKD and its progression remain incompletely understood. 17 In this study, NNMT was validated as a potential prognostic biomarker for tubular fibrosis in progressive CKD, and inhibition of NNMT using small-molecule inhibitors was found to improve the degree of tubulointerstitial fibrosis. These findings suggest that NNMT could potentially serve as a therapeutic target for renal fibrosis in progressive CKD.
NNMT Gene Expression Analysis on NephroseqV5 Database
NNMT expression analysis was conducted using the NephroseqV5 database. 22 The Ju CKD and Ju CKD2 datasets were used to extract NNMT expression, and NNMT expression in CKD tissue was compared with that in healthy living donor tissue. Co-expressed genes associated with NNMT were extracted from the datasets, and genes with a Pearson's correlation coefficient |r| > 0.5 were retained. For pathway and tissue enrichment analysis of the co-expressed genes, Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway and Gene Ontology (GO) term enrichment analyses were performed.
Accessing NNMT Association with CKD Progression
Gene expression data from the discovery cohort of 68 patients were obtained from NCBI's Gene Expression Omnibus (GSE60861). 23 Analysis of the data was performed using the limma package in the R statistical software framework. 24 CKD progression was defined as the development of ESRD or a doubling of serum creatinine during a minimum follow-up period of six months. The stable group consisted of patients who did not develop ESRD or double their serum creatinine. 25 Nineteen of the sixty-eight CKD patients developed ESRD and were therefore categorized as the progressive group. Preprocessing of the data included background correction, quantile normalization, and summarization of probes with identical sequences on the array. Differentially expressed transcripts between stable and progressive patients were identified using the significance analysis of microarrays (SAM) method, with the false discovery rate (FDR) set at <5%.
Separately, based on the median value of NNMT expression, the dataset was divided into low-expression and high-expression groups to explore the relationship between NNMT expression level and clinical features.
Validation of Evaluated NNMT Expression in Patients with CKD
A total of 37 patients with histopathologically confirmed glomerular diseases were enrolled as a validation set. The inclusion criterion was an abnormality of kidney structure or function, present for more than 3 months, with implications for health. 23 Exclusion criteria were acute tubular injury, malignancies, severely impaired hepatic function, and other rheumatic diseases. Renal biopsy tissues were obtained and immediately frozen in liquid nitrogen for real-time quantitative PCR (qRT-PCR) to explore the relationship between NNMT expression and progressive CKD. Progressive CKD is characterized by an increased degree of tubulointerstitial atrophy and fibrosis. 26 Therefore, CKD patients with tubulointerstitial fibrosis were defined as the TIF/CKD group, and patients with essentially normal renal tubules were defined as the control group. All biopsies were assessed by pathologists blinded to patient outcomes. Renal tubulointerstitial lesions were defined as tubulointerstitial fibrosis and scored semiquantitatively as follows: 0 = no lesion; 1 = lesion in ≤1% of areas; 2 = lesion in 1-25% of areas; 3 = lesion in 26-50% of areas; and 4 = lesion in >50% of areas. 27 This study was approved by the Guangxi Medical University Ethics Committee and was performed under the ethical principles of the Declaration of Helsinki (approval number: 2019-KY(0108)). Written informed consent was obtained from all study participants prior to sample collection.
Single-Cell RNA Sequencing Data Download and Processing
To understand the localization of NNMT in the kidney, the single-cell RNA sequencing dataset GSE131685 28 was downloaded from the GEO database. The data comprised single-cell RNA sequencing of three normal human kidney samples. Downstream analysis was performed in the R statistical software.
Dimensionality Reduction Analysis of Single-Cell Data and Identification of Cell Subpopulations
The MergeSeurat function was used to merge the three kidney datasets. Based on the median number of genes, the percentage of mitochondrial genes, and mRNA reads, cells with <500 or >4000 detected genes and cells with >15% mitochondrial RNA reads were excluded. The relationship between the percentage of mitochondrial genes and mRNA reads was examined and visualized, as was the relationship between the number of mRNAs and mRNA reads (Supplementary Figure 1A and B). To mitigate the batch effect in downstream analysis, the R package Harmony was employed to integrate the scRNA-seq data. The data were then normalized through log-normalization, and the FindVariableFeatures function was utilized to identify highly variable genes. Principal component analysis was performed on these highly variable genes using the RunPCA function (Supplementary Figure 1C).
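The workflow above was run in R with Seurat and Harmony; purely as an illustration, a Scanpy-based Python sketch applying the same stated thresholds might look as follows (the file name and batch key are placeholders):

```python
import scanpy as sc
import scanpy.external as sce

adata = sc.read_h5ad("kidney_merged.h5ad")          # placeholder: the three merged samples
adata.var["mt"] = adata.var_names.str.startswith("MT-")
sc.pp.calculate_qc_metrics(adata, qc_vars=["mt"], percent_top=None,
                           log1p=False, inplace=True)
# QC thresholds from the text: 500-4000 detected genes, <15% mitochondrial reads
adata = adata[(adata.obs.n_genes_by_counts > 500)
              & (adata.obs.n_genes_by_counts < 4000)
              & (adata.obs.pct_counts_mt < 15)].copy()
sc.pp.normalize_total(adata, target_sum=1e4)        # log-normalization
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
sc.pp.pca(adata, use_highly_variable=True)
sce.pp.harmony_integrate(adata, key="sample")       # batch-effect correction
```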
NNMT Expression Analysis in Renal Compartments
The expression of NNMT mRNA was analyzed using data from GSE21785, which included 6 kidney biopsy samples from living transplant donors. Additionally, the expression distribution of NNMT in renal compartments was examined using data from the Human Protein Atlas. 29
Animal Model
Ethical approval of the project was obtained from the Animal Ethics Committee of Guangxi Medical University (No. 202101019). All experimental animal procedures were performed following standard ethical rules and conformed to the Guide for the Care and Use of Laboratory Animals. Male C57BL/6 mice were obtained from the Guangxi Medical University Animal Center and housed under standard conditions with a regular light/dark cycle and free access to water and chow. The mice were randomly assigned to three groups: sham, UUO, or UUO+JBSNF-000088 30 (n = 4-6 mice per group). The UUO procedure was performed on all mice except those in the sham group. The UUO+JBSNF-000088 group received daily oral administration of JBSNF-000088 at a dosage of 50 mg/kg body weight, starting one day after the UUO operation. The sham and UUO-only groups were treated with saline. On days 3, 7, 14, and 21 after the UUO operation, the kidneys were rapidly harvested and washed with saline. Kidneys were either immediately frozen in liquid nitrogen for qRT-PCR and biochemical assays or fixed in neutral buffered formalin for histochemical examination.
Histology
Renal tissue was fixed overnight in 4% paraformaldehyde, embedded in paraffin, and sectioned into 4 μm slices. Serial sections were prepared for hematoxylin and eosin (HE) staining and Masson trichrome staining. Renal tubulointerstitial lesions, including tubular dilation, tubular atrophy, cast formation, and tubulointerstitial fibrosis, were evaluated. Collagen deposition areas in each mouse kidney tissue were quantified on Masson trichrome-stained sections and analyzed with NIH ImageJ software.
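Quantification was performed in ImageJ; purely for illustration, a crude Python analogue of "fraction of blue (collagen) pixels over tissue pixels" could look like this (the thresholds are arbitrary assumptions, not the study's settings):

```python
import numpy as np
from skimage import io

def collagen_area_fraction(path, blue_margin=1.15, white_cutoff=690):
    img = io.imread(path)[..., :3].astype(float)     # RGB Masson trichrome image
    r, b = img[..., 0], img[..., 2]
    tissue = img.sum(axis=-1) < white_cutoff         # drop near-white background
    collagen = (b > blue_margin * r) & tissue        # aniline blue dominates red
    return collagen.sum() / max(tissue.sum(), 1)
```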
Quantitative Real-Time PCR (qRT-PCR)
Total mRNA was isolated from mouse kidney tissue and the 37 kidney biopsies using Trizol (Invitrogen, Thermo Fisher Scientific). 1 μg of RNA was used to synthesize cDNA with the PrimeScript RT reagent kit (Takara, DRR037A) according to the manufacturer's protocol. qRT-PCR was performed with SYBR Green (Takara, Dalian, China) using an Applied Biosystems 7500 Real-Time PCR System (Thermo Fisher Scientific) as instructed. The relative expression levels of the indicated genes were normalized to GAPDH, and expression fold changes were calculated using the 2^-ΔΔCt method. Each qRT-PCR reaction was performed in triplicate.
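As an illustration of the quantification step, a minimal implementation of the 2^-ΔΔCt calculation (the Ct values shown are hypothetical):

```python
import numpy as np

def fold_change_ddct(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
    """2^-ddCt relative expression, normalized to GAPDH and calibrated to a control group."""
    d_ct      = np.asarray(ct_target) - np.asarray(ct_gapdh)             # per sample
    d_ct_ctrl = np.asarray(ct_target_ctrl) - np.asarray(ct_gapdh_ctrl)
    dd_ct = d_ct - d_ct_ctrl.mean()                                      # calibrate to controls
    return 2.0 ** (-dd_ct)

# e.g., NNMT Ct values from duplicate wells (hypothetical numbers)
print(fold_change_ddct([24.1, 24.3], [18.0, 18.1], [26.5, 26.4], [18.2, 18.0]))
```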
Statistical Analysis
Statistical analyses were conducted using SPSS version 23.0 and R version 3.6.3. The data are presented as the mean ± SD. For normally distributed data, a two-tailed unpaired Student's t-test was used for pairwise comparisons, and one-way ANOVA was used for multi-group comparisons, with pairwise comparisons within multiple groups conducted using the LSD t-test. The nonparametric Kruskal-Wallis test and Mann-Whitney U-test were used when the data did not exhibit a normal distribution. The log-rank test was used to assess survival, and Cox proportional hazards models were constructed for univariable and multivariable regression analysis of prognostic factors. Time-dependent ROC (timeROC) curve analysis was employed to assess diagnostic value. All tests were two-sided, and a p-value less than 0.05 was considered statistically significant.
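For the survival analyses, a Python sketch using lifelines conveys the logic (the study used SPSS and R; the file and column names below are placeholders):

```python
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("discovery_cohort.csv")   # placeholder columns: time, event, NNMT, creatinine, male
df["NNMT_high"] = (df["NNMT"] > df["NNMT"].median()).astype(int)

# Log-rank test between the median-split NNMT groups
hi, lo = df[df.NNMT_high == 1], df[df.NNMT_high == 0]
lr = logrank_test(hi["time"], lo["time"],
                  event_observed_A=hi["event"], event_observed_B=lo["event"])
print(lr.p_value)

# Multivariable Cox proportional hazards model for prognostic factors
cph = CoxPHFitter()
cph.fit(df[["time", "event", "NNMT", "creatinine", "male"]],
        duration_col="time", event_col="event")
cph.print_summary()
```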
NNMT Increases with CKD Progression and is Associated with Renal Function Decline
The Ju CKD TubInt and Ju CKD TubInt2 sets were extracted from NephroseqV5; these include arterial hypertension, diabetic nephropathy, focal segmental glomerulosclerosis, IgA nephropathy, lupus nephritis, membranous glomerulonephropathy, minimal change disease, vasculitis, and thin basement membrane disease. Compared to healthy living donors, NNMT showed elevated expression in CKD (Figure 1A). NNMT mRNA levels increased progressively with CKD stage (Figure 1B) and showed a negative correlation with eGFR (Figure 1C). Among the various types of histopathological nephropathy, NNMT was most highly expressed in focal segmental glomerulosclerosis, diabetic nephropathy, and vasculitis (Figure 1D).
Functional Annotation and Pathway Enrichment Analysis of NNMT-Correlated Genes in CKD Patients
Using Pearson's correlation coefficient (|r| > 0.5), we screened co-expressed genes from the Ju CKD and Ju CKD2 datasets in the NephroseqV5 database; the intersection of the candidate gene sets contained 57 genes (Figure 2A). The co-expressed genes associated with NNMT expression are involved in cell-cell adhesion and extracellular matrix structural constituents. According to KEGG pathway predictions, the correlated genes can be grouped into significant pathways, including the PI3K-Akt signaling pathway and extracellular matrix (ECM)-receptor interaction (Figure 2B and C). These analyses reveal the involvement of complex biological processes and metabolic pathways in renal tubular fibrosis.
Association of NNMT with CKD Progression in the Discovery Cohort
We used a retrospective cohort of 68 patients with various CKD diagnoses as the discovery cohort to evaluate the association of NNMT with CKD progression. Based on the median value of NNMT, the dataset was divided into low-expression and high-expression groups. Compared to the low-expression group, the high-expression group showed increased creatinine levels, lower eGFR, and higher comorbidity rates (Table 1). The survival curves (Figure 3) demonstrated that the median survival time of patients with high NNMT expression was 66 months, whereas that of patients with low NNMT expression was 88 months. There was a significant difference in overall survival between the NNMT mRNA high- and low-expression groups.
In addition, comparing the CKD progression group with the CKD stable group showed that NNMT levels were significantly higher in the progressive group (Figure 4A). Univariate Cox regression analysis identified gender (male), serum creatinine, and NNMT as prognostic factors for CKD progression. These variables that were significant in the univariate Cox regression analysis (Figure 4B) were included in the multivariate Cox regression analysis. Elevated NNMT expression was identified as an independent prognostic risk factor for CKD progression (Figure 4C). The area under the time-dependent ROC curve for CKD progression at 1, 3, and 5 years was 0.849, 0.775, and 0.877, respectively.
NNMT Increases in Renal Tubular Fibrosis Tissue and is Associated with the Degree of Fibrosis
To assess the clinical relevance of NNMT in CKD, we performed qRT-PCR on kidney biopsy specimens from 37 CKD patients with various nephropathies (Table 2). Compared with the control group, the expression of NNMT was strikingly increased in the TIF/CKD group (Figure 5A). Semiquantitative scoring of renal tubulointerstitial fibrosis (TIF score) demonstrated a positive correlation between NNMT expression and the TIF score (Figure 5B). Furthermore, NNMT expression showed a negative correlation with eGFR (Figure 5C).
Validation of Increased NNMT Levels in a Single-Cell RNA Sequencing Reanalysis
Reanalysis of the single-cell data showed that NNMT expression was significantly upregulated mainly in proximal tubule cells (Figure 6A-D). In addition, exploration of the Human Protein Atlas (HPA) confirmed this proximal tubule-predominant expression of NNMT (Figure 6E), consistent with our analysis results.
NNMT is Expressed in Tubular Renal Compartments
NNMT was found to be predominantly distributed in proximal tubules based on the Human Protein Atlas. Immunohistochemical staining of normal renal tissue from the HPA indicated that NNMT was predominantly expressed in tubular rather than glomerular compartments. Furthermore, baseline gene expression abundances in renal tissues were calculated using the GSE21785 dataset, revealing that NNMT was highly expressed in tubular renal compartments compared to glomerular compartments (Figure 7A and B).
Effect of JBSNF-000088 Inhibition on NNMT Expression in UUO
Compared to the sham group, NNMT expression in UUO renal tissue began to increase on the third day after the UUO procedure and peaked on the twenty-first day (Figure 8A). JBSNF-000088, a small-molecule compound used as an NNMT inhibitor in vivo, 31 was administered to UUO mice one day after the operation, resulting in a significant decrease in NNMT mRNA levels compared to the UUO-only group (Figure 8B). Masson staining demonstrated that, compared with the sham group, the UUO group showed significantly increased renal fibrosis, and JBSNF-000088 treatment significantly improved the degree of fibrosis in the UUO model (Figure 8C and D). Additionally, the mRNA levels of TGF-β1, α-SMA, VIM, and FN were decreased in the JBSNF-000088 group compared to the UUO-only group (Figure 8E). In summary, UUO mice in which NNMT was pharmacologically inhibited exhibited lower levels of interstitial fibrosis, whereas severe tubulointerstitial lesions, including tubular dilation, tubular atrophy, and tubulointerstitial fibrosis, were observed in untreated UUO mice.
Discussion
The complex etiology of chronic kidney disease (CKD) and the variable rate of development during the latent phase of the disease 32 impede the identification of early markers and effective therapeutics for disease progression. 9,33,34 As a result, CKD has become a rapidly growing global cause of death. 35 The research focus has shifted towards developing biomarkers with high sensitivity and specificity for different stages of CKD and exploring the combined evaluation of multiple biomarkers. 36 Tubulointerstitial fibrosis and atrophy are common features in almost all forms of kidney disease, and their severity has consistently proven to be a reliable indicator in biopsies for predicting progression to end-stage renal disease (ESRD). Tubulointerstitial fibrosis is the histopathological hallmark of CKD, and the extent of fibrosis is the best predictor of progression to ESRD. 37,38 However, early detection of interstitial fibrosis and tubular atrophy is challenging, and these lesions are often observed only in the advanced stages of CKD. 39 In this study, we identified target genes through databases and analyzed both the NephroseqV5 and GEO databases to explore the correlation between NNMT mRNA expression levels and CKD outcomes. We found that NNMT is predominantly expressed in proximal tubules based on single-cell sequencing analysis. Additionally, we verified the pro-fibrotic association of NNMT in kidney tissues of mice with unilateral ureteral obstruction (UUO) and in 37 CKD patients, providing evidence from multiple perspectives for the anti-fibrotic effect of NNMT inhibition. The main findings of this study are as follows: a) NNMT is highly expressed in tubular renal compartments; b) NNMT is upregulated in CKD patients and associated with a decline in renal function; c) NNMT has the potential to be a biomarker for CKD progression; and d) NNMT's actions appear to be related to specific effects exerted in tubulointerstitial fibrosis. The degree of renal tubular fibrosis worsened with increased NNMT expression, providing new theoretical support for preventing CKD from progressing to ESRD.
NNMT is a key metabolic regulator that can induce abnormal pathophysiological changes in fibroblasts by affecting genes related to collagen production. 40 It can even serve as a biomarker of urological tumor progression. 17,41 The findings of the present study indicate that NNMT is upregulated in CKD populations and associated with both a decrease in renal function and the degree of renal fibrosis, making it a potential biomarker for CKD progression. Through bioinformatics analysis, we found a significant correlation between NNMT mRNA expression and both disease stage and the decline in renal function in CKD patients. Multivariate analysis revealed that high expression of NNMT mRNA was an independent prognostic factor in CKD patients. Importantly, NNMT may be stable in body fluids such as plasma, serum, and urine samples. Studies have demonstrated that NNMT expression levels are significantly higher in patients with bladder tumors, and urine NNMT expression levels serve as an accurate criterion for early, non-invasive diagnosis of bladder cancer. 42,43 Considering that renal tubular fibrosis could thus be assessed in CKD patients through blood and urine tests, NNMT has great potential as a biomarker for predicting, monitoring, and improving the prognosis of CKD.
Our study also revealed that NNMT significantly regulates the PI3K-Akt signaling pathway and extracellular matrix (ECM)-receptor interaction in CKD. Previous studies have shown that NNMT knockdown effectively inhibits the invasive capacity of clear-cell renal cell carcinoma (ccRCC) cells and that NNMT plays a key role in cell invasion by activating the PI3K/Akt/SP1/MMP-2 pathway in ccRCC. 44 We hypothesize that NNMT may play a crucial role in the progression of renal fibrosis through the PI3K-Akt signaling pathway; however, the underlying mechanism remains to be explored.
Currently, the most widely used model for studying tubulointerstitial fibrosis is UUO, in which surgical obstruction of the ureter leads to hemodynamic changes within the kidney, followed by tubular injury and cell death. We inhibited NNMT using a small-molecule inhibitor and detected the expression of related pro-fibrotic factors using qRT-PCR. The results showed that, compared to the model group, the inhibitor group exhibited decreased expression levels of pro-fibrotic factors and a lesser degree of fibrosis. High expression levels of NNMT were induced in UUO mice, consistent with previous studies that found increased expression of NNMT in mouse models, indicating its involvement in oxidative stress, inflammation, and apoptosis in renal tubular cells. 45 Furthermore, NNMT has been shown to play a significant role in renal interstitial fibrosis through injury, inflammation, and apoptosis of renal tubular cells, 46 which is partially consistent with the results of our study.
A recent study demonstrated that upregulation of NNMT in a UUO model can mitigate the extent of renal tubular interstitial fibrosis, suggesting that NNMT is a potential molecular target for the treatment of renal fibrosis. 18 The main mechanism involved the metabolites of NNMT. 21 However, another study found that overexpression of NNMT in the kidney may dysregulate NAD+ and methionine metabolism, ultimately leading to renal fibrosis. 21 Faced with different research outcomes, we explored the relationship between NNMT and kidney disease from a completely new angle. We confirmed that NNMT was positively correlated with tubulointerstitial fibrosis, and high expression of NNMT was a risk factor for CKD patients. Furthermore, a study found that Qian Yang Yu Yin protected hypertensive rats from hypertension-induced kidney injury by inhibiting NNMT, 47 which provides further support for our conclusion. The underlying pathophysiological mechanism of NNMT in renal fibrosis still needs to be investigated. Nevertheless, the aforementioned evidence suggests that NNMT plays a crucial role in fibrosis. The complex role of NNMT in disease and physiology, along with its tissue specificity, makes it an attractive target for drug development.
This study has limitations. Retrospective analyses of large public databases are limited in providing continuous follow-up data that could offer timely insight into real-world practice. Additionally, although pharmacological inhibition of NNMT in vivo provides useful evidence for its role, studies involving gene-deficient mice could offer more compelling evidence. Finally, we did not explore the potential mechanisms of NNMT in CKD. Future studies should delve into the detailed mechanisms underlying the relationship between NNMT and CKD.
Conclusion
NNMT is predominantly expressed in tubular renal compartments, upregulated in CKD patients, and associated with the degree of renal tubular fibrosis. It has the potential to serve as a biomarker for CKD progression, with its mechanism of action specifically related to tubulointerstitial fibrosis. | 2023-08-09T15:13:47.285Z | 2023-08-01T00:00:00.000 | {
"year": 2023,
"sha1": "d05cb2a6fd21d022dce8c3bc03c5bbc818593c03",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "99a12ff1a5367a1407aa9ed9d907dd50555fbec3",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
67058117 | pes2o/s2orc | v3-fos-license | Requirement for Cyclic AMP/Protein Kinase A-Dependent Canonical NFκB Signaling in the Adjuvant Action of Cholera Toxin and Its Non-toxic Derivative mmCT
Cholera toxin (CT) is widely used as an effective adjuvant in experimental immunology for inducing mucosal immune responses; yet its mechanisms of adjuvant action remain incompletely defined. Here, we demonstrate that mice lacking NFκB, compared to wild-type (WT) mice, had a 90% reduction in their systemic and mucosal immune responses to oral immunization with a model protein antigen [Ovalbumin (OVA)] given together with CT. Further, NFκB−/− mouse dendritic cells (DCs) stimulated in vitro with CT showed reduced expression of MHCII and co-stimulatory molecules, such as CD80 and CD86, as well as of IL-1β and other pro-inflammatory cytokines, compared to WT DCs. Using a human monocyte cell line THP1 with an NFκB activation reporter system, we show that CT induced NFκB signaling in human monocytes, and that inhibition of the cyclic AMP-protein kinase A (cAMP-PKA) pathway abrogated the activation and nuclear translocation of NFκB. In a human monocyte-CD4+ T cell co-culture system we further show that the strong Th17 response induced by CT treatment of monocytes was abolished by blocking the classical, but not the alternative, NFκB signaling pathway of monocytes. Our results indicate that activation of classical (canonical) NFκB pathway signaling in antigen-presenting cells (APCs) by CT is important for CT's adjuvant enhancement of Th17 responses. Similar findings were obtained using the almost completely detoxified mmCT mutant protein as adjuvant. Altogether, our results demonstrate that activation of the classical NFκB signal transduction pathway in APCs is important for the adjuvant action of both CT and mmCT.
INTRODUCTION
Cholera toxin (CT) is a potent enterotoxin produced by Vibrio cholerae bacteria that, through its action on the intestinal epithelium in infected individuals, can cause the severe, often life-threatening diarrhea and fluid loss characteristic of cholera disease (1). CT is also a potent mucosal vaccine adjuvant that has been used extensively in experimental immunology (1,2). However, in contrast to its enterotoxic activity, which has been mechanistically well-defined, the signal transduction pathways through which CT exerts its strong adjuvant action remain incompletely understood. The lack of safe, effective mucosal adjuvants is generally held to be a main barrier to the development of a wider range of mucosal vaccines than the handful currently available, especially vaccines based on purified antigens (2). Understanding the molecular mechanisms of the adjuvant action of CT, which is generally held to be the "gold standard" mucosal adjuvant, could clearly guide current efforts to develop alternative, non-toxic mucosal vaccine adjuvants for human use (3,4).
NFκB signaling is an important component of the immune system (20) involving multiple homodimeric or heterodimeric NFκB/Rel protein family members: p50/NFκB1, p52/NFκB2, p65/RelA, RelB, and c-Rel. The generation of an innate immune response via NFκB signaling occurs largely at the level of APCs, usually through the interaction between PAMPs (pathogen-associated molecular patterns) and membrane-bound or cytosolic PRRs (pattern recognition receptors) (21)(22)(23)(24), leading to NFκB activation and translocation into the cell nucleus and subsequent NFκB-dependent increased expression of cytokines, chemokines and adhesion molecules important for APC activation and induction of the adaptive immune response. NFκB signal transduction mechanisms can be classified into the canonical (classical) or the alternative (non-classical) pathways. The canonical NFκB pathway is activated in cells in response to pro-inflammatory stimuli, such as LPS, TNF, or CD40L (25,26), leading to activation of IKK (Inhibitor of Kappa B Kinase) complex, NFκB heterodimer p50-RelA (p65) release and nuclear translocation, DNA binding, and increased transcription of NFκB responsive elements. The alternative pathway, on the other hand, is activated by members of the TNF-receptor superfamily, such as the lymphotoxin receptor, B-cell activating factor, and CD40, and is dependent on the induction of NIK (NF-Kappa-B-Inducing Kinase) signaling, leading to release and nuclear translocation of mainly p52-RelB dimers (27).
The role, if any, of NFκB signaling in the adjuvant action of CT is not well understood. Earlier work reported that CT induces translocation of NFκB into the nucleus of both dendritic and intestinal epithelial cells, suggesting that NFκB signaling may be important in the adjuvant action of CT (28,29). However, it remains to be determined whether the CT-induced nuclear translocation of NFκB in APCs activates downstream functional pro-inflammatory NFκB signaling; whether this is mediated through a CT-induced activation of the cAMP-PKA pathway; and to what extent NFκB signaling is responsible for CT's adjuvant effect.
Here, we examine the role of NFκB in the adjuvant action of CT. Using studies of both murine and human APCs in vitro and immunization of NFκB−/− as compared to wild-type mice in vivo, we demonstrate a strong, almost total dependence on NFκB signaling for CT's adjuvanticity. We further show that activation of NFκB by CT goes through the cAMP-PKA pathway; that the adjuvant effect is mediated via the classical, and not the alternative, NFκB signaling pathway in APCs; and that CT-induced NFκB signaling is important for the expression of IL-1β, the key adjuvant cytokine for subsequent T cell activation. Since CT is too toxic for use as a vaccine adjuvant in humans, we also investigated the role of NFκB in the adjuvant activity on APCs of mmCT (multiple mutated CT), a recently developed non-toxic yet adjuvant-active CT derivative generated by introducing multiple mutations in the toxic-active A subunit (30).
Adjuvants, Antigens, Polyclonal Stimulus, Protein Inhibitors
Purified cholera toxin (CT) was purchased from List Biological Laboratories, and mmCT, a non-toxic adjuvant-active derivative of CT, was prepared and purified in-house (30). The endotoxin contents determined by the Limulus assay were very low: 7.4 EU/mg protein for CT and 3.6 EU/mg protein for mmCT (13). Ovalbumin (OVA grade V; Sigma) was used as antigen for mouse immunizations. Staphylococcal enterotoxin B (SEB; Sigma-Aldrich) was used as a superantigen polyclonal stimulus. Specific protein inhibitors used were H-89 (Sigma), a PKA inhibitor; caffeic acid phenethyl ester (CAPE, Sigma), a specific NFκB inhibitor; and aspirin, a COX inhibitor.
Mice
Female C57BL/6 (B6) and NFκB p50−/− mice [purchased from JAX Laboratories (31)], 6-8 weeks old when used for experiments, were housed under specific-pathogen-free conditions. All treatments and procedures were performed in accordance with the Swedish Animal Welfare Act (1988:534) and the Animal Welfare Ordinance (1988:539). The study was approved by the Ethical Committee for Laboratory Animals in Gothenburg, Sweden (ethical permit number 56/13).
Immunization and Collection of Specimens
Immunization of mice and collection and preparation of specimens for immunological assays were performed as previously described (32). Briefly, mice received two intragastric doses, at an interval of 10 days, of 1 mg OVA given alone or supplemented with 10 µg CT. Venous blood, small intestinal tissue and fecal pellets were collected 1 day before the first immunization and again at the time of sacrifice 10-12 days after the last immunization. Sera were prepared by removing cells from the blood samples by centrifugation, and stored at −20°C until analyzed. Fecal extracts were prepared by emulsifying five fecal pellets from each mouse in 500 µl of ice-cold PBS containing 0.1 mg/ml of soybean trypsin inhibitor (STI), 1% (w/v) bovine serum albumin (BSA, Sigma Aldrich), 25 mM ethylenediaminetetraacetic acid (EDTA), and 0.035 mg/ml Pefabloc (Coatech AB) in PBS mixed 50-50% (v/v) with glycerol. Debris was removed by centrifugation (16,000 × g, 10 min, 4°C) and the supernatants were stored at −80°C until analyzed. Intestinal tissue was obtained by the PERFEXT method (32). Briefly, the mice were perfused with 0.1% heparin-PBS solution immediately after sacrifice, followed by excision of ca. 3 cm of the uppermost small intestine, which was weighed before storage at −20°C in a PBS solution (1 ml per g of tissue) containing 2 mM phenylmethylsulfonyl fluoride, 0.1 mg/ml trypsin inhibitor from soybean (Sigma Chemical Co.), and 0.05 M EDTA. At the time of analysis, the samples were thawed, ice-cold saponin (Sigma) was added to a final concentration of 2% (wt/vol) to permeabilize cell membranes, and they were vortex-homogenized and kept at 4°C overnight. The tissue debris was spun down at 16,000 × g for 10 min, and the supernatant (referred to as intestinal tissue extract) was analyzed for antibody content by ELISA.
Human APCs and T Cells
Peripheral blood mononuclear cells (PBMCs), CD14+ monocytes and CD4+ T cells were prepared from buffy coats of healthy human blood donors as previously described (13). DCs were purified from PBMCs using the "Blood Dendritic Cell Isolation Kit II" (Miltenyi Biotec), according to the manufacturer's protocol. Cells were maintained at 37 °C with 5% CO2, in DMEM-F12 complete medium (Life Technologies) supplemented with 1% gentamicin (Sigma-Aldrich; 50 mg/ml) and 5% human AB+ serum (Sahlgrenska University Hospital blood bank).
Monocyte Cell Lines
THP1 cells and the THP1 Blue−NFκB monocyte cell line, carrying a stably integrated NFκB-inducible secreted embryonic alkaline phosphatase (SEAP) reporter construct used to analyze NFκB induction, were purchased from InvivoGen. The THP1 cells were maintained in supplemented RPMI medium (10% fetal bovine serum, 1% gentamicin, and 1% β-mercaptoethanol), and the THP1 Blue−NFκB cell line was maintained in the same medium supplemented with 100 µg/ml Normocin (InvivoGen) and 100 U/ml penicillin-100 µg/ml streptomycin (InvivoGen). Cell handling and preparation were performed in accordance with the manufacturer's protocol (InvivoGen).
Cell Treatments
Monocytes or Primary DCs-T Cell Co-Culture
CD14+ monocytes (5 × 10⁴ in 200 µl/well) or total purified DCs (1 × 10⁴ in 200 µl/well) were stimulated with 1 µg/ml of CT or mmCT, or left untreated, for 16 h in 96-well round-bottom plates. When used in co-culture experiments with CD4+ T cells, the treated or untreated monocytes or DCs were washed 3 times with PBS and then mixed with autologous CD4+ T cells (5 × 10⁴ monocytes or 1 × 10⁴ DCs and 1 × 10⁵ autologous CD4+ T cells in 200 µl per well) together with SEB superantigen (10 ng/ml), and the cell mixture was cultured for 3 days. Culture supernatants were then collected, and IL-17A cytokine levels were measured using an ELISA kit (Invitrogen). Control experiments using polymyxin for inhibition of endotoxins demonstrated that the very low levels of endotoxin in the CT and mmCT preparations used did not contribute to the cellular effects of these proteins (13).
For inhibition experiments, monocytes or DCs were treated with 20 µM H-89 or 20 µM CAPE added 1 h prior to the subsequent 16 h treatment with adjuvants.
For testing specific gene expression inhibition by small interfering RNAs (siRNAs), siRNAs with specificity for the RELA and RELB genes, respectively, and AllStars negative control siRNA were purchased from Qiagen. The siRNAs were diluted to a final concentration of 25 nM in culture medium without serum. HiPerFect transfection reagent (Qiagen) was added according to the manufacturer's instructions and incubated for 10 min at 25 °C for complex formation. The reagent mixture was then added to pre-seeded CD14+ cells, which were then transfected for 24 h at 37 °C with 5% CO2. Cells were washed 3 times with PBS and then further incubated with 1 µg/ml CT or PBS for 16 h before being co-cultured with CD4+ T cells and analyzed for IL-17A production as described above.
THP1 Blue−NFκB Cells
THP1 Blue−NFκB cells (1 × 10⁵/well) were treated for 16 h with 1 µg/ml of CT or mmCT or 1 mM of the cAMP analog dcAMP, or left untreated, in cell culture medium in 96-well plates. Inhibition of PKA was tested by adding 20 µM H-89 1 h prior to the treatment with adjuvants. After incubation for 16 h, the cells were centrifuged at 350 × g for 5 min, and 20 µl of the cell supernatant was mixed with 180 µl of pre-warmed SEAP detection reagent QUANTI-Blue (InvivoGen). After further incubation for 3 h under cell culture conditions, the levels of NFκB-induced SEAP were measured in a spectrophotometer at 620 nm.
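Readouts from this reporter are typically expressed as fold-induction over untreated cells after blank subtraction. A minimal sketch of that calculation in Python (all OD620 readings below are hypothetical, not study data):

```python
import numpy as np

def seap_fold_induction(od_treated, od_untreated, od_blank):
    """NFkB-driven SEAP activity as fold-induction over untreated cells,
    after subtracting a reagent-only blank from both groups."""
    signal = np.mean(od_treated) - od_blank
    baseline = np.mean(od_untreated) - od_blank
    return signal / baseline

# Hypothetical OD620 readings from triplicate wells:
print(seap_fold_induction([0.92, 0.88, 0.95], [0.21, 0.19, 0.22], od_blank=0.05))  # ~5.5-fold
```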
RNA Extraction, Sequencing, and Bioinformatics Analysis
Purified murine BMDCs (1 × 10⁶/ml) were left untreated or treated with 5 µg/ml of OVA given alone or together with 1 µg/ml of CT for 2, 4, or 16 h, washed three times with PBS, and stored at −70 °C. Total RNA was extracted with the RNeasy Mini Kit (Qiagen) and sent to the Technology Center for Genomics & Bioinformatics, University of California, Los Angeles for cDNA library preparation (InteGenX Apollo 324 System) and sequencing on the Illumina HiSeq 2000 sequencing system. Each sample generated a total of 80 to 100 million paired-end reads of 100 bp each.
TrimGalore!, version 0.3.5, was used to trim raw RNA-seq reads with the following criteria: a quality cut-off of Q30, Illumina adapter trimming, and removal of reads shorter than 30 bp and of reads left unpaired. Reads were aligned to the reference genome using the STAR software, and the aligned sequence reads were subsequently processed using SAMtools, yielding a total of 75-105 million processed reads per sample. To quantify gene expression, htseq-count was used to tally the number of reads mapped to exonic regions of the genome. Genes whose transcript read counts showed more than a 2-fold difference between untreated and treated samples were then analyzed for functional enrichment using the Gene Ontology Biological Process category of DAVID Bioinformatics.
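The final filtering step (selecting transcripts with a >2-fold difference between untreated and treated samples) can be reproduced in a few lines of pandas. This is an illustrative sketch only: the file name, sample column names, and pseudocount are assumptions, and a full analysis would normally also normalize counts for library size first:

```python
import pandas as pd

# Hypothetical htseq-count output: rows = genes, one column per sample.
counts = pd.read_csv("htseq_counts.tsv", sep="\t", index_col=0)

pseudocount = 1  # avoids division by zero for genes with no reads

# Fold-change of a treated sample over its untreated control, per gene.
fc = (counts["OVA_CT_4h"] + pseudocount) / (counts["untreated_4h"] + pseudocount)

# Keep genes changed >2-fold in either direction, as described in the text,
# and save the gene list for enrichment analysis (e.g., upload to DAVID).
regulated = counts[(fc > 2) | (fc < 0.5)]
regulated.index.to_series().to_csv("genes_for_enrichment.txt", index=False, header=False)
```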
ELISA Analysis
Serum and intestinal-mucosal antibody responses were determined by ELISA. High-binding ELISA trays (Greiner) were coated overnight at 4 °C with 1 µg/ml of OVA. Plates were washed 3 times and then blocked with 1% BSA for 1 h to minimize unspecific binding. Samples, and a known sample used as a standard, were included in each plate and titrated by 3-fold serial dilutions. Plates for IgG analysis were incubated for 90 min at room temperature and those for IgA determination for 4 h at 37 °C. All plates were washed twice with 0.05% (v/v) Tween 20 in PBS and once with PBS. HRP-conjugated goat anti-mouse IgG was added to the plates with serum samples, and goat anti-mouse IgA-HRP (Southern Biotech) to the plates with fecal or small intestine extracts. The plates were incubated at 4 °C overnight, washed twice, and then developed with OPD for 20 min, at which time the enzyme reaction was stopped with H2SO4 and absorbance values were read at 490 nm. Endpoint titers were determined as the extrapolated sample dilution giving an absorbance value of 0.4 above the no-sample background.
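The endpoint titer determination amounts to finding the dilution at which the absorbance curve crosses background + 0.4, interpolated on a log-dilution scale. The sketch below shows one straightforward way to do this; the exact extrapolation routine used in the study is not specified, and the example numbers are invented:

```python
import numpy as np

def endpoint_titer(dilutions, od_values, background, cutoff=0.4):
    """Interpolate the dilution at which absorbance crosses background + cutoff,
    working on a log10(dilution) scale; targets outside the measured OD range
    are clamped to the endpoints by np.interp."""
    target = background + cutoff
    log_dil = np.log10(np.asarray(dilutions, dtype=float))
    od = np.asarray(od_values, dtype=float)
    # np.interp needs increasing x; OD falls as dilution rises, so we
    # interpolate log-dilution as a function of OD instead.
    order = np.argsort(od)
    return 10 ** np.interp(target, od[order], log_dil[order])

# Hypothetical 3-fold titration of one serum sample (OD490 readings):
print(endpoint_titer([100, 300, 900, 2700], [1.9, 1.2, 0.6, 0.2], background=0.05))
```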
Western Blot Analysis
THP1 monocyte cells (2 × 10⁷/5 ml) were left untreated or treated for 4 h with 1 µg/ml of CT at 37 °C with 5% CO2. Cells were harvested on ice, and cytoplasmic and nuclear fractions were separated using the NE-PER kit according to the manufacturer's instructions (NE-PER; Thermo Scientific). The reagents were supplemented with protease inhibitors (Thermo Scientific). Total protein concentration was measured with a BCA Protein Assay Kit (Pierce). Ten micrograms of protein were denatured in reducing sample buffer (NuPAGE LDS 4×; Novex, Life Technologies) with the addition of 2.5% β-mercaptoethanol (Sigma-Aldrich) and heated at 70 °C for 10 min. Samples were separated by SDS-PAGE on 4-12% Bis-Tris gels (NuPAGE gels; Novex, Life Technologies) and then transferred onto a nitrocellulose transfer membrane (Millipore). After blocking with 5% non-fat milk in Tris-buffered saline (TBS; 150 mM NaCl, 3 mM EDTA, 50 mM Tris-HCl, pH 8.0) for 2 h, the membrane was immunoblotted overnight at 4 °C with an anti-p65 rabbit polyclonal antibody (Abcam), an anti-β-actin antibody (Cell Signaling; cytoplasmic housekeeping protein), and an anti-TBP antibody (Cell Signaling; nuclear housekeeping protein). The membrane was then washed three times with TBST buffer (150 mM NaCl, 3 mM EDTA, 0.1% Tween-20, 50 mM Tris-HCl, pH 8.0) and incubated with horseradish peroxidase (HRP)-conjugated goat anti-rabbit antibody (Jackson ImmunoResearch) for 1 h at room temperature. After washing twice with TBST and once with TBS, proteins were visualized using the sensitive ECL Detection System (Pierce) according to the manufacturer's instructions.
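Band intensities from such blots are typically quantified by densitometry, normalized to the housekeeping control, and expressed relative to the unstimulated sample set to 100% (as described in the Figure 3 legend below). A minimal sketch of that ratio calculation, with all intensities hypothetical:

```python
def normalized_percent(p65, housekeeping, p65_ns, housekeeping_ns):
    """p65 band intensity relative to its housekeeping control, expressed as a
    percentage of the unstimulated (NS) sample, which is set to 100%."""
    return 100.0 * (p65 / housekeeping) / (p65_ns / housekeeping_ns)

# Hypothetical densitometry values (arbitrary units) for a nuclear fraction:
print(normalized_percent(p65=850, housekeeping=1000, p65_ns=400, housekeeping_ns=980))  # ~208%
```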
FACS Analysis
For flow cytometric analysis, mBMDCs (1 × 10⁶/ml) were incubated with or without 1 µg/ml of CT or mmCT for 16 h. Cells were then washed and stained with the following murine antibodies: anti-CD11c BV711, anti-CD80 FITC, anti-CD86 APC (BD Biosciences), and anti-I-A/I-E Pacific Blue (BioLegend). After staining, the cells were fixed in 4% paraformaldehyde and analyzed with an LSRII flow cytometer (BD Biosciences), and data were analyzed with FlowJo software (Tree Star).
For intracellular staining of human IL-1β, PBMCs (2 × 10⁶/2 ml) were incubated with 1 µg/ml of CT or mmCT or medium only for 16 h, with or without prior addition of 20 µM CAPE, and the cells were then treated with brefeldin A (3 mg/ml; BD Biosciences) for another 4 h. Cells were washed, treated with AmCyan Live/Dead stain (Invitrogen), and then surface-stained with anti-CD4 A700, anti-CD3 PerCP, and anti-CD14 FITC (BD Biosciences). After fixation and permeabilization with Cytofix/Cytoperm solution (BD Biosciences), the cells were finally stained with anti-IL-1β PE (BD Biosciences), washed, and resuspended in FACS buffer prior to flow cytometric analysis.
RT-PCR Assay
BMDCs (1 × 10⁶/ml) from B6 control mice and NFκB−/− mice were left untreated or treated with 1 µg/ml of CT or mmCT for 16 h at 37 °C in 5% CO2. Total RNA was extracted using the RNeasy Mini Kit (Qiagen), and cDNA was generated from 0.5 µg of total RNA using the QuantiTect Reverse Transcription Kit (Qiagen). Customized quantitative real-time PCR was performed (SABiosciences) following the manufacturer's instructions. The data were normalized to hypoxanthine phosphoribosyltransferase 1 (HPRT) gene expression and analyzed using a web-based software package for the PCR array system (SABiosciences).
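The array software is proprietary, but normalization of a target gene to HPRT conventionally follows the 2^-ddCt method, which the sketch below reproduces; all Ct values are invented for illustration:

```python
def fold_change_ddct(ct_gene_treated, ct_hprt_treated, ct_gene_control, ct_hprt_control):
    """Relative expression by the 2^-ddCt method: the gene of interest is
    normalized to HPRT, then treated cells are expressed relative to
    untreated controls."""
    d_ct_treated = ct_gene_treated - ct_hprt_treated
    d_ct_control = ct_gene_control - ct_hprt_control
    return 2.0 ** -(d_ct_treated - d_ct_control)

# Hypothetical Ct values for Il1b in CT-treated vs. untreated BMDCs:
print(fold_change_ddct(22.1, 20.0, 26.3, 20.2))  # ~16-fold upregulation
```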
NFκB Signaling Is Important for the in vivo Adjuvant Effect of CT in Mice
To examine the role of NFκB signaling in the adjuvant properties of CT in vivo, serum and intestinal-mucosal antibody responses were determined in NFκB−/− and B6 WT control mice immunized with either OVA alone or OVA plus CT. As expected, there was a strong enhancement of both serum IgG and fecal and intestinal IgA anti-OVA responses in WT mice after immunization with OVA plus CT as compared to immunization with OVA alone (the latter, in turn, increased anti-OVA serum IgG titers ca. 10-fold above the pre-immunization background levels but did not significantly increase the fecal anti-OVA IgA levels; data not shown). In contrast, in the similarly immunized NFκB−/− mice, the CT-induced enhancement was essentially lacking, being suppressed by ≥90% in comparison to the responses in WT mice (Figure 1). These results indicate that the adjuvant effect of CT on both mucosal and systemic humoral immune responses in mice is dependent on NFκB signaling.
NFκB Signaling in Mouse DCs Is Upregulated by CT and Is Important for DC Activation and Stimulation of T Cells
The primary adjuvant action of CT appears to be to promote the activation and antigen-presenting capacity of DCs and other APCs (5,33,34). Transcriptomic analyses of BMDCs from WT mice exposed for different time periods to either OVA plus CT or, for comparison, to OVA alone demonstrated that the transcripts for a large number of cytokines and other immunological activation markers were strongly upregulated by CT (Supplementary Figure S1). The levels of transcripts were usually higher after incubation for 16 h as compared to 2 or 4 h, but in some cases, most notably for IL-1β, IL-12, and CD83, the maximal gene expression occurred at the earlier time-points and had declined at 16 h. Among the genes that were upregulated in the CT-treated cells there was an especially strong increase in the IL-1β transcript level at 4 h, 22-fold for OVA plus CT-treated cells and >4-fold in OVA-only treated cells; this agrees with previous studies demonstrating increased expression of this cytokine in CT-treated APCs and its important role for CT's adjuvant function (13, 33-36). In contrast, although the IL-12 transcript levels were slightly (<2-fold) elevated at 2 and 4 h after OVA plus CT treatment, they did not differ from those after OVA-only treatment and had essentially disappeared at 16 h, in accordance with previous reports that IL-12 expression is not specifically increased and may even be suppressed by CT (16, 37, 38).
Many of the CT-enhanced immune genes, e.g., IL-1α, IL-1β, CD80, and IL-6, are under NFκB regulation (39). Consistent with this, and with a previous report of CT-induced NFκB translocation to the nucleus of murine APCs in vitro (28), our transcriptomic analyses showed that treatment of murine DCs with CT promoted upregulation of gene sets associated with translocation of NFκB to the nucleus, effects that were prominent at both 4 and 16 h (Supplementary Figure S2).
To more directly examine the role of NFκB signaling in the activation of DCs by CT, CT-treated BMDCs from WT and NFκB−/− mice were examined by RT-PCR to analyze gene expression for various cytokines and other immune-associated molecules. Consistent with our initial transcriptomic findings using WT DCs, the mRNA expression of the IL-1α, IL-1β, IL-6, and IL-23 cytokines, as well as of the CD40, CD80 and CD86 surface co-stimulatory molecules, was significantly increased in WT BMDCs treated with CT as compared to untreated cells, whereas expression was enhanced to a much lower extent, if at all, in the NFκB−/− BMDCs. Other examined genes, such as those for IL-10, BAFF, and MMP11, were not significantly increased by CT in either WT or NFκB−/− BMDCs (Figure 2A). Further analyses by FACS supported that the CT-treated WT BMDCs had strongly increased surface expression of CD80 and CD86 as well as of MHCII, whilst the expression of these molecules on NFκB−/− DCs was much lower and only modestly increased compared to the levels in untreated cells (Figure 2B). Thus, our data suggest that the CT-induced upregulation in BMDCs of many co-stimulatory molecules and pro-inflammatory cytokines associated with the adjuvant action of CT in mice is dependent on CT-induced activation of NFκB signaling.
NFκB Signaling Is Also Required for the Adjuvant Action of CT on Human Immune Cells
Our attention next turned to examining the role of NFκB signaling in the adjuvant action of CT on human APCs. This was based on two main reasons. One was to learn whether our findings in mice would extend to humans, at least as testable on human APCs in vitro. Another reason was that while CT exhibits a strong anti-proliferative effect on murine T cells, which prohibits in vitro studies of CT-induced T cell activation in murine systems (40), this effect does not extend to human T cells, whose activation by CT-treated antigen-exposed human APCs can therefore easily be examined (13).
We tested the effect of CT treatment on NFκB induction using a monocyte cell line (THP1 Blue−NFκB) equipped with an NFκB reporter system. Treatment of THP1 Blue−NFκB cells with CT resulted in very clear NFκB activation relative to untreated cells (Figure 3A). We also determined the translocation of canonical NFκB p65 from the cytosol to the nucleus in CT-treated THP1 cells. As shown in Figure 3B, cytoplasmic p65 was reduced at 4 h in CT-treated as compared to untreated cells, whilst the nuclear amount of p65 protein was increased. These data demonstrate that CT treatment of human monocytes results in activation and nuclear translocation of the canonical NFκB pathway.
We examined whether NFκB signaling is required for the adjuvant action of CT on primary human APCs using a previously established co-culture system: purified human blood monocytes or DCs were incubated with CT or medium, and after thorough washing the APCs were co-cultured with autologous CD4+ T cells in the presence of SEB superantigen, whereafter the levels of IL-17A, the predominant T cell cytokine increased by CT treatment of human APCs, were measured (13). In the present study, monocytes as well as DCs purified from human peripheral blood were either pre-treated with CAPE, a specific NFκB protein inhibitor, left untreated, or, as a further control, treated with aspirin (a COX protein inhibitor) prior to the addition of CT or medium alone, followed by the standard procedures. The results show that while Th17 responses were significantly enhanced using CT-treated DCs or monocytes, they were significantly reduced when the CT-treated APCs had been pre-treated with the specific NFκB inhibitor (Figures 4A,B), but not with the control (COX) inhibitor (Figure 4C). These results support the importance of NFκB signaling for the adjuvant effect of CT on human monocytes and DCs.
The Adjuvanticity of CT Involves the Canonical, and Not the Alternative Pathway of NFκB Signaling
The NFκB signal induced by CT in THP1 Blue−NFκB cells demonstrates that CT stimulates classical/canonical NFκB signaling. However, NFκB signaling can also be mediated via alternative pathways (41). To examine whether either or both NFκB pathways are involved in the adjuvant action of CT, we undertook a modified monocyte-CD4+ T cell co-culture experiment. In this system, purified CD14+ monocytes were first transfected with silencing RNA (siRNA) specific for RelA, involved in the canonical pathway, or RelB, involved in the alternative pathway, or with negative control siRNA (AllStars siRNA), before being treated with CT. After washing, the monocytes were then co-cultured with purified CD4+ T cells together with SEB, and Th17 responses were measured. As expected, treatment of monocytes with the control siRNA did not interfere with the CT-induced enhancement of the IL-17A response (Figures 4D,E). Treatment with RelA-specific siRNA (Figure 4D), but not with RelB-specific siRNA (Figure 4E), on the other hand, resulted in a significant decrease of the CT-mediated IL-17A response. These findings suggest that activation of the canonical NFκB pathway is the main signal transduction mechanism involved in the adjuvant action of CT.
CT-Induced NFκB Activation Is Mediated by cAMP-PKA Signaling
Our previous work has demonstrated that the Th17-promoting adjuvant effect of CT on human cells in vitro involves cAMP-PKA signaling in monocytes and other APCs (13). Given the critical role of cAMP-PKA signaling and, as shown here, also NFκB signaling in the adjuvant action of CT, we investigated whether the activation of NFκB in CT-stimulated human monocytes is dependent on cAMP-PKA signaling. Treatment of THP1 Blue−NFκB cells with a cAMP analog (dcAMP) resulted in strong activation of NFκB signaling, comparable in magnitude to that induced by CT (Figure 5A). Furthermore, treatment of the THP1 Blue−NFκB cells with a competitive inhibitor of cAMP-dependent PKA, H-89, prior to the addition of CT abrogated the CT-induced NFκB activation (Figure 5B). These data support that NFκB activation by CT in human monocytes is dependent on cAMP-PKA signaling.
CT-Induced Activation of NFκB in APCs Promotes IL-1 Signaling
IL-1 signaling by APCs has been found to be critical for the increase in Th17 responses by CT (36, 42-44). We have previously shown that inhibition of IL-1 signaling in human monocytes abrogated the Th17-promoting adjuvant effect of CT (13). To investigate whether the stimulation of IL-1 signaling in APCs by CT is dependent on NFκB, monocytes were treated with CT in the presence or absence of the CAPE NFκB inhibitor, and intracellular IL-1β expression was then measured by flow cytometry. Consistent with our previous findings (13), CT induced strong upregulation of IL-1β in human monocytes, which was almost completely abrogated in cells pretreated with CAPE (Figures 5C,D). These findings demonstrate that the CT-induced increase in IL-1β signaling in APCs is strongly NFκB-dependent.
FIGURE 3 | Immunochemical evidence for CT-induced NFκB translocation into the nucleus. THP1 cells were incubated with or without 1 µg/ml CT for 4 h, and proteins from cytosolic and nuclear fractions were separated by SDS-PAGE and immunoblotted with an anti-p65 antibody; β-actin served as the cytoplasmic and TBP as the nuclear housekeeping control. Normalized values show the percentage ratios of p65 protein relative to the housekeeping proteins after CT treatment, compared to a set value of 100% for unstimulated cells (NS). Bars show mean values plus SEMs for CT-treated and unstimulated (NS) cells tested in triplicate; *defines a statistically significant difference at p < 0.05.
FIGURE 4 | The canonical but not the alternative NFκB pathway in APCs is activated by CT and involved in its adjuvant activity. Purified human CD14+ monocytes (A,C-E) or DCs (B) were incubated for 1 h with the NFκB-specific inhibitor CAPE (A,B), or as a control with a COX inhibitor (aspirin) (C), or for 24 h with siRNAs specific for RelA involved in the canonical (classical) NFκB pathway (D), AllStars control siRNA (D,E), or RelB involved in the alternative pathway (E). Cells were then treated for 16 h with 1 µg/ml CT or medium, washed, and co-cultured for 3 days with autologous CD4+ T cells plus SEB. Three separate experiments were performed, each including separate tests on cells from 3 to 5 individuals, and the data shown are the mean values plus SEMs of IL-17A levels in culture supernatants from all individuals, measured by ELISA. *represents p < 0.05 for compared values.
NFκB Signaling Is Also Required for the Adjuvant Activity of mmCT
The toxicity of CT precludes its use as a vaccine adjuvant in humans, whereas the mutant molecule mmCT lacks detectable enterotoxicity yet still has potent adjuvant activity (30). A series of experiments was performed to determine whether mmCT would display a similar dependence on NFκB signaling for its adjuvant activity as demonstrated for CT in this study. First, gene expression analysis by RT-PCR on BMDCs from WT and NFκB−/− mice treated with mmCT demonstrated a strong NFκB dependence for mmCT-induced transcription of both the co-stimulatory molecules CD80 and CD86 and the pro-inflammatory cytokines IL-1α, IL-1β, and IL-6 (Figure 6A). This was confirmed by flow cytometry analysis of mmCT-treated BMDCs, which revealed reduced expression of CD80 and CD86 as well as MHCII in NFκB−/− BMDCs compared to the levels induced in WT BMDCs (data not shown).
The NFκB dependence of mmCT's adjuvant activity was also demonstrated using human APCs. Treatment of THP1 Blue−NFκB cells with mmCT showed clear evidence of NFκB activation (Figure 6B). Further, pre-treatment of THP1 Blue−NFκB cells with the PKA inhibitor H-89 before the addition of mmCT resulted in abrogation of NFκB activation (Figure 6C), thus indicating a similar cAMP-PKA dependence of the mmCT-induced NFκB activation as seen with CT.
Moreover, co-culturing mmCT-treated monocytes with CD4+ T cells together with SEB showed that mmCT, similar to CT, induced a strongly enhanced Th17 response, which was abolished when the monocytes had been pre-treated with the NFκB inhibitor CAPE before the mmCT addition (Figures 6D,E). Likewise, intracellular IL-1β expression by human monocytes measured by flow cytometry, which was significantly increased by mmCT treatment, was significantly reduced in mmCT-treated cells that had been pre-treated with CAPE, indicating that, similar to CT, mmCT-induced IL-1β expression is dependent on NFκB signaling (Figures 6F,G).
Altogether, these data support and extend our previous work indicating that mmCT, despite its lack of detectable enterotoxicity and its 1,000-fold reduced ability to induce cAMP in target cells compared to CT, displays close similarity to CT with regard to its molecular mechanism of action. Both CT and mmCT induce NFκB signaling via a cAMP-PKA-dependent pathway, and the activation of NFκB leads to IL-1β-dependent promotion of Th17 (and other cellular) responses.
DISCUSSION
This study identifies NFκB signaling as a key molecular pathway in the adjuvant action of both CT and the mutant CT derivative, mmCT. The latter molecule, despite its potent NFκB-inducing adjuvant activity, has no detectable enterotoxic activity and should therefore, in contrast to CT, be possible to use as an adjuvant in humans. In vivo studies in WT and NFκB−/− mice demonstrated that after oral immunization with a model protein (OVA) together with or without CT adjuvant, the lack of NFκB was associated with a >90% reduction in the capacity of CT to enhance OVA-specific mucosal IgA as well as systemic IgG responses. This was associated with a complete or marked reduction of the CT-induced increase in gene expression for various immunostimulatory cytokines (IL-1α, IL-1β, IL-6, and IL-23) and co-stimulatory molecules (CD40, CD80, CD86) in NFκB−/− BMDCs relative to WT. Since the p50 mutation in NFκB−/− mice induces multifocal defects in the immune response (31), whereas CT is known to almost exclusively exert its adjuvant effect through activation of APCs, the pronounced reduction of immunostimulatory cytokines and co-stimulatory molecules in NFκB−/− DCs supports that the poor immune responses in vivo largely, if not exclusively, reflect impaired APC activation by CT. An important role for NFκB signaling in APCs for the adjuvant action of CT was also found when human immune cells were examined. In addition to demonstrating that the findings in mice extend to humans, at least as can be tested using human APCs in vitro, the consistent strong dependence on NFκB signaling for CT's adjuvant effects also on human APCs from multiple blood donors practically rules out that the observed effects were to any significant degree dependent on genetic or environmental factors, such as diet or microbiota.
In the human monocyte cell line THP1 Blue−NFκB, which has an inbuilt NFκB reporter system, CT increased NFκB expression as well as the translocation of NFκB into the nucleus. The functional significance of CT-induced NFκB signaling in human APCs for the adjuvant activity was indicated by a practically complete abrogation of CT's ability to promote SEB-induced T cell (Th17) responses when NFκB signaling in the APCs, whether monocytes or isolated DCs, was abolished by either a specific molecular inhibitor (CAPE) or siRNA. The requirement for NFκB signaling by CT is evidently restricted to canonical signaling, since siRNA inhibition of RelA but not of RelB prevented the enhancement of Th17 responses by CT. Interestingly, it was reported that the breakdown of OVA-induced oral tolerance in mice by oral co-administration of OVA with CT was associated with activation by CT of the canonical NFκB pathway in Peyer's patches and mesenteric lymph nodes (45).
We further investigated the relationship between CT-induced cAMP/PKA signaling and NFκB signaling for the adjuvant effect of CT on APCs. Previous work has yielded conflicting results, reporting that cAMP/PKA signaling can either activate (46,47) or inhibit (48,49) NFκB, suggesting cell type- and/or context-dependent effects of cAMP/PKA signaling on NFκB activity. Our previous work has demonstrated that the predominant Th17-promoting adjuvant effect of CT on human immune cells in vitro is mediated via CT-induced cAMP-PKA signaling in monocytes and other APCs (13). Consistent with this, we demonstrate here that the induction of canonical NFκB signaling by CT appears to be mediated via cAMP/PKA activation. Using the THP1 Blue−NFκB cell line reporter system, we found strong NFκB activation when the cells were treated with a cAMP analog, whereas treatment of THP1 Blue−NFκB cells with a PKA inhibitor prior to the addition of CT abolished the signal for NFκB activation. The detailed molecular mechanisms by which CT-induced cAMP/PKA signaling activates NFκB remain to be defined but may involve phosphorylation of RelA. PKA is known to phosphorylate Ser276 of RelA, leading to nuclear translocation and increased transcriptional activity of NFκB. Besides Ser276, multiple other phosphorylation sites have been identified in RelA, which can serve as sites for direct or indirect interaction with cAMP-PKA signaling (47).
Importantly, the induction of NFκB signaling by CT in APCs triggers increased expression of IL-1β, an important pro-inflammatory cytokine for CT's adjuvant function (5,35) and critical for the promotion of Th17 responses (13,14,36). This was clearly demonstrated when CT-stimulated monocytes were pre-treated with the NFκB inhibitor CAPE, in which case both the normal CT-induced increase in intracellular IL-1β and the promotion of Th17 responses in co-cultured CD4+ T cells were abolished.
A similar dependence on NFκB signaling for adjuvant activity as that shown for CT was also found for the practically non-toxic mmCT derivative. We have previously shown that the adjuvant function of mmCT on human APCs, like that of CT, is dependent on cAMP/PKA signaling (13), even though the cAMP levels induced by mmCT are 1,000-fold reduced compared to those induced by CT (30). We now extend this observation by demonstrating, in both murine and human APCs, that cAMP/PKA-dependent NFκB signaling is important for the ability not only of CT but also of mmCT to increase expression of pro-inflammatory cytokines including IL-1β in APCs and, as tested in the human APC-T cell co-culture system, to functionally augment the development of the Th17 cell response.
Similar to our previous findings on cytokine production in monocytes and IL-17 production from co-cultured T cells, the levels of NFκB activation and translocation induced by mmCT resembled those induced by CT, despite the much lower levels of cAMP that are induced by mmCT. Our previous conclusion that the low cAMP levels induced by mmCT are apparently "both sufficient and necessary" for its strong adjuvant effect clearly applies also to the activation of NFκB signaling in APCs by mmCT (13). This, however, does not exclude that there could still be differences between CT and mmCT in the way they may engage other, as yet undefined, pathways contributing to the adjuvant effect. In this regard, it is noteworthy that there are other enterotoxin derivatives, such as LTK63 and CTA1-DD, whose adjuvant activity appears to be independent of cAMP (2,50). When given intranasally to mice, the cholera toxin B subunit, which does not induce any cAMP, also has significant adjuvant activity, although less than that of CT and mmCT (51,52).
Altogether, as studied both in murine APCs in vitro and in a mouse model in vivo, as well as in human immune cells, our findings identify an important role of cAMP/PKA-dependent canonical NFκB signaling in APCs for the adjuvant activity of both CT and its practically non-toxic derivative mmCT.
DATA AVAILABILITY
The RNA-seq datasets generated for this study can be found under the SRA BioProject ID: PRJNA517420.
ETHICS STATEMENT
The study was approved by the Ethical Committee for Laboratory Animals in Gothenburg, Sweden (Ethical permit number 56/13).
AUTHOR CONTRIBUTIONS
MT, JH, MiL, and MaL conceived and designed the study. MT and MaL performed the experiments and analyzed the data. MT, JH, and MaL wrote the manuscript. All authors read and approved the final version of the manuscript.
"year": 2019,
"sha1": "74d2ed4eeb1cbffd4a1742ed64c8a9f50b84f58c",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fimmu.2019.00269/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "74d2ed4eeb1cbffd4a1742ed64c8a9f50b84f58c",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
Calcilytic NPSP795 Increases Plasma Calcium and PTH in an Autosomal Dominant Hypocalcemia Type 1 Mouse Model
ABSTRACT Calcilytics are calcium‐sensing receptor (CaSR) antagonists that reduce the sensitivity of the CaSR to extracellular calcium. Calcilytics have the potential to treat autosomal dominant hypocalcemia type 1 (ADH1), which is caused by germline gain‐of‐function CaSR mutations and leads to symptomatic hypocalcemia, inappropriately low PTH concentrations, and hypercalciuria. To date, only one calcilytic compound, NPSP795, has been evaluated in patients with ADH1: doses of up to 30 mg per patient have been shown to increase PTH concentrations, but did not significantly alter ionized blood calcium concentrations. The aim of this study was to further investigate NPSP795 for the treatment of ADH1 by undertaking in vitro and in vivo studies involving Nuf mice, which have hypocalcemia in association with a gain‐of‐function CaSR mutation, Leu723Gln. Treatment of HEK293 cells stably expressing the mutant Nuf (Gln723) CaSR with 20nM NPSP795 decreased extracellular Ca2+‐mediated intracellular calcium and phosphorylated ERK responses. An in vivo dose‐ranging study was undertaken by administering an s.c. bolus of NPSP795 at doses ranging from 0 to 30 mg/kg to heterozygous (Casr+/Nuf) and homozygous (CasrNuf/Nuf) mice, and measuring plasma PTH responses at 30 min postdose. NPSP795 significantly increased plasma PTH concentrations in a dose‐dependent manner, with the 30 mg/kg dose causing a maximal (≥10‐fold) rise in PTH. To determine whether NPSP795 can rectify the hypocalcemia of Casr+/Nuf and CasrNuf/Nuf mice, a submaximal dose (25 mg/kg) was administered, and plasma adjusted‐calcium concentrations were measured over a 6‐hour period. NPSP795 significantly increased plasma adjusted‐calcium in Casr+/Nuf mice from 1.87 ± 0.03 mmol/L to 2.16 ± 0.06 mmol/L, and in CasrNuf/Nuf mice from 1.70 ± 0.03 mmol/L to 1.89 ± 0.05 mmol/L. Our findings show that NPSP795 elicits dose‐dependent increases in PTH and ameliorates the hypocalcemia in an ADH1 mouse model. Thus, calcilytics such as NPSP795 represent a potential targeted therapy for ADH1. © 2020 The Authors. JBMR Plus published by Wiley Periodicals LLC on behalf of American Society for Bone and Mineral Research.
Introduction
Autosomal dominant hypocalcemia (ADH) is a genetically heterogeneous disorder of extracellular calcium (Ca2+e) homeostasis consisting of two reported variants: ADH type 1 (ADH1; OMIM #601198) is caused by germline gain-of-function mutations of the G-protein-coupled calcium-sensing receptor (CaSR), whereas ADH type 2 is associated with gain-of-function Gα11 mutations. ADH1 has an estimated prevalence of approximately 1 in 100,000 (9) and is characterized by hypocalcemia, increased circulating phosphate concentrations, inappropriately low or normal PTH concentrations, and a relative hypercalciuria with urinary calcium-to-creatinine ratios that are within or above the reference range (1,3,10,11). ADH1 has a substantial burden of illness and causes hypocalcemic symptoms such as paraesthesia, muscle spasms, and seizures in around 50% of patients (10). Ectopic calcifications are also common in ADH1: >35% of patients develop basal ganglia calcifications, whereas >10% of patients have nephrocalcinosis (10). Furthermore, patients with severe forms of ADH1 can develop a Bartter-like syndrome, characterized by hypokalemic alkalosis, renal salt wasting, and hyperreninemic hyperaldosteronism (12,13). ADH1 has a high unmet clinical need, as conventional therapies such as vitamin D analogs (eg, alfacalcidol and calcitriol) and calcium supplements predispose patients with ADH1 to the development of marked hypercalciuria, nephrocalcinosis, nephrolithiasis, and renal failure (1,10). Recombinant PTH injections have occasionally been used to treat symptomatic forms of ADH1 (14). However, use of this treatment is expensive and limited, as it is administered by s.c. bolus injections or continuous pump infusion, and it may not prevent patients with ADH1 from developing hypercalciuric renal complications (14). Thus, better treatments are required, and antagonists of the CaSR, referred to as calcilytics (15,16), have the potential to act as a targeted therapy for ADH1. To date, all calcilytics are negative allosteric modulators (NAMs) of the CaSR and comprise two main classes of orally active compounds: the amino-alcohols and the quinazolinones. Calcilytics were originally investigated as therapies for osteoporosis, as these compounds transiently stimulated PTH secretion, which had the potential to induce bone anabolic effects (17). However, clinical trials have shown that calcilytics lack efficacy for postmenopausal osteoporosis (18,19) but can lead to sustained elevations in serum calcium concentrations in healthy subjects (20,21), thereby highlighting the potential of CaSR NAMs to treat hypocalcemic disorders such as ADH. In support of this, calcilytics such as NPS 2143, NPSP795, JTT-305/MK-5442, and AXT914 have been shown to normalize the increased signaling responses associated with ADH1-causing mutant CaSRs in vitro (22-25), and the calcilytics NPS 2143 and JTT-305/MK-5442 have also been shown to increase plasma calcium and PTH concentrations in ADH1 mouse models in vivo (24,26). However, the effectiveness of calcilytics as treatments for patients with ADH1 remains unclear. For example, a phase IIb study involving 5 patients with ADH1 showed that i.v. administration of NPSP795 (an amino-alcohol calcilytic compound) at doses ranging from 5 to 30 mg increased PTH but did not significantly alter ionized blood calcium concentrations (23). We have further evaluated the efficacy of NPSP795 treatment for ADH1 by undertaking in vitro and in vivo studies involving Nuf mice, which have hypocalcemia (Table 1) in association with a germline gain-of-function CaSR mutation, Leu723Gln.
Our findings demonstrate that NPSP795 increases PTH in a dose-dependent manner, and that a higher dose (25 mg/kg) than that used in the reported phase IIb study (23) significantly increases plasma calcium concentrations.
Materials and Methods
Compounds
NPSP795, which is also known as SHP635, was provided by NPS/Shire Pharmaceuticals (Lexington, MA, USA) and dissolved in a 20% aqueous solution of 2-hydroxypropyl-β-cyclodextrin (Sigma-Aldrich, St. Louis, MO, USA) prior to use in the in vitro and in vivo studies.
Animals
All study mice were littermates aged between 22 and 31 weeks. Mice were kept in accordance with welfare guidance from the UK Government Home Office Department (London, UK) in an environment controlled for light (12-hour light/dark cycle), temperature (21 ± 2 °C), and humidity (55% ± 10%) at the Medical Research Council (MRC) Harwell Centre (Oxfordshire, UK) (27). Mice had free access to water (25 ppm chlorine) and were fed ad libitum a commercial diet (RM3; Special Diet Services, Witham, UK) that contained 1.24% calcium, 0.83% phosphorus, and 2948 IU/kg of vitamin D. Nuf mice (MGI ID: MGI:3054788) were maintained on the inbred 102/H strain background (strain ID: C3;102-CasrNuf/H; MGI:5291924), which is a substrain bred at the Mary Lyon Centre (Harwell, UK) (26,28). Animal studies were approved by the MRC Harwell Institute Ethical Review Committee and were licensed under the Animals (Scientific Procedures) Act 1986, issued by the UK Government Home Office Department (PPL30/2752).
Intracellular calcium measurements
The Ca2+i responses of TRex-CaSR-WT and mutant TRex-CaSR-Gln723 cells were assessed by a flow cytometry-based assay, as reported (2,3). Briefly, 48 hours posttransfection, the cells were harvested, washed in calcium- and magnesium-free Hank's balanced salt solution (HBSS; Invitrogen), and loaded with 1 µg/mL indo-1-acetoxymethylester (Indo-1 AM; Molecular Probes, Eugene, OR, USA) for 1 hour at 37 °C (2,3). After the removal of free dye, the cells were resuspended in calcium- and magnesium-free HBSS and maintained at 37 °C. TRex-CaSR-WT and mutant TRex-CaSR-Gln723 cells were incubated with either a 20% aqueous solution of 2-hydroxypropyl-β-cyclodextrin (vehicle) or NPSP795 at concentrations of 20 and 40nM for 1 hour, as described (31). Cells in suspension were stimulated by sequentially adding calcium to increase the Ca2+e concentration in a stepwise manner from 0 to 15mM, and then analyzed on a MoFlo modular flow cytometer (Beckman Coulter, Brea, CA, USA) by measurement of Ca2+i-bound Indo-1 AM (at 410 nm) and free Indo-1 AM (at 485 nm), using a JDSU Xcyte UV laser (Coherent, Inc., Santa Clara, CA, USA), on each cell at each Ca2+e concentration, as described (2,3). Cytomation Summit software (Beckman Coulter) was used to determine the peak mean fluorescence ratio of the transient response after each individual stimulus, expressed as a normalized response (2,3). Concentration-response curves were generated using a four-parameter nonlinear regression curve-fit model (GraphPad Prism; GraphPad Software, Inc., La Jolla, CA, USA) to calculate the half-maximal effective concentration (EC50) values (2,3).
Phosphorylated and total ERK measurements
TRex-CaSR-WT and mutant TRex-CaSR-Gln723 cells were seeded in poly-L-lysine-treated 48-well plates and incubated for 24 hours. The following day, the medium was changed to serum-free tetracycline selection medium and incubated for a further 12 hours prior to treatment of cells with 0 to 10mM CaCl2. Cells were lysed in Surefire lysis buffer, and AlphaScreen Surefire ERK assays (PerkinElmer, Waltham, MA, USA) measuring phosphorylated and total proteins were performed, as described (6,31). For studies with NPSP795, cells were incubated with either a 20% aqueous solution of 2-hydroxypropyl-β-cyclodextrin (vehicle) or NPSP795 for 4 hours prior to calcium treatment. The fluorescence signal in both assays was measured using the PheraStar FS microplate reader (BMG Labtech, Ortenberg, Germany) (6,31). Fold-change phosphorylated ERK (pERK) responses were expressed as a ratio of pERK to total ERK responses.
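As an illustration of the curve-fitting step used to derive EC50 values from the intracellular calcium assay, the sketch below fits a four-parameter logistic model with SciPy; the data points are invented for demonstration, and the study itself used GraphPad Prism:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ec50, hill):
    """Four-parameter logistic concentration-response model."""
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** hill)

# Hypothetical normalized Ca2+i responses at increasing Ca2+e (mM):
ca_e = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0, 7.5, 10.0, 15.0])
resp = np.array([0.02, 0.05, 0.15, 0.40, 0.70, 0.85, 0.95, 0.99, 1.00])

params, _ = curve_fit(four_pl, ca_e, resp, p0=[0.0, 1.0, 3.0, 2.0])
print(f"EC50 = {params[2]:.2f} mM, Hill slope = {params[3]:.1f}")
```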
In vivo administration of NPSP795 to Nuf mice
Mice were randomly allocated to receive NPSP795 or vehicle as a single bolus by s.c. injection. None of the mice had undergone any experimental procedures prior to dosing. Study investigators were blinded during animal handling and also when undertaking endpoint measurements. The primary experimental outcome was a change in plasma calcium at 1 hour postdose in heterozygous (Casr+/Nuf) mice. Blood samples were collected from the lateral tail vein following application of topical local anesthesia for measurement of plasma PTH, or collected from the retro-orbital vein under isoflurane terminal anesthesia for measurement of other plasma biochemical parameters (26,27).
Plasma biochemical analyses
Plasma was separated by centrifugation at 5000 × g for 10 min at 8 °C and analyzed for calcium, albumin, phosphate, urea, and creatinine on a Beckman Coulter AU680 analyzer, as described (26). Plasma calcium was adjusted for variations in albumin concentrations using the formula: plasma calcium (mmol/L) − ([plasma albumin (g/L) − 30] × 0.02), as reported (27). Plasma PTH concentrations were determined using an ELISA for mouse intact PTH (Immutopics, Inc., San Clemente, CA, USA) (26). Fold-change PTH responses are expressed as the ratio of the plasma PTH concentrations of NPSP795-treated mice to the mean plasma PTH concentration of the respective vehicle-treated mice.
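Both derived quantities are simple arithmetic; a minimal sketch of the albumin adjustment and the PTH fold-change calculation (the input values below are illustrative, not study data):

```python
def adjusted_calcium(calcium_mmol_l, albumin_g_l):
    """Albumin-adjusted plasma calcium using the study's formula:
    adjusted Ca = measured Ca - (albumin - 30) * 0.02, with albumin in g/L."""
    return calcium_mmol_l - (albumin_g_l - 30.0) * 0.02

def pth_fold_change(pth_treated_ng_l, mean_pth_vehicle_ng_l):
    """Fold-change in plasma PTH relative to the vehicle-group mean."""
    return pth_treated_ng_l / mean_pth_vehicle_ng_l

print(adjusted_calcium(1.80, 25.0))   # -> 1.90 mmol/L
print(pth_fold_change(931.0, 60.0))   # -> ~15.5-fold
```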
Statistical analyses
All in vitro studies were performed in four biological replicates. For the in vitro measurement of Ca2+i responses, statistical comparisons were undertaken using the F test (2,3). Fold-change pERK responses were analyzed by two-way ANOVA with Tukey's multiple-comparisons test. Mouse sample size calculations were undertaken using G*Power statistical software. The unit of analysis was a single mouse. A sample size of n = 5 mice allocated to the treatment and control groups provided >80% power to detect a >15% increase in plasma calcium concentrations. Biochemical parameters were analyzed by one-way ANOVA with Sidak's multiple-comparisons test. All analyses were undertaken using GraphPad Prism (GraphPad), and a value of p < 0.05 was considered significant. All data are shown as mean ± SEM.
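For readers working outside Prism, an open-source approximation of the one-way ANOVA with Sidak-corrected comparisons is sketched below. Note this uses pairwise t tests plus a Sidak correction rather than Prism's pooled-variance procedure, and all group values are invented:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

# Hypothetical plasma calcium values per group (mmol/L), n = 5 per group:
groups = {
    "vehicle": np.array([1.84, 1.90, 1.87, 1.85, 1.89]),
    "npsp_1h": np.array([2.10, 2.20, 2.14, 2.18, 2.16]),
    "npsp_3h": np.array([2.05, 2.12, 2.08, 2.11, 2.09]),
}

f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.2g}")

# Pairwise comparisons against vehicle, Sidak-corrected.
pvals = [stats.ttest_ind(groups["vehicle"], groups[g]).pvalue
         for g in ("npsp_1h", "npsp_3h")]
reject, p_corr, _, _ = multipletests(pvals, alpha=0.05, method="sidak")
print(dict(zip(("npsp_1h", "npsp_3h"), p_corr)))
```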
Results
Effect of NPSP795 on the signaling responses of cells expressing the gain-of-function mutant Gln723 CaSR
To investigate the effect of NPSP795 on CaSR signal transduction, we established cells stably expressing WT CaSR or the Nuf mutant Gln723 CaSR, using the TRex Flp-in system (29,30). Following clonal cell selection, tetracycline addition to the culture media caused a robust overexpression of the CaSR protein in both WT and mutant Gln723 cells (Fig. 1A). We assessed whether NPSP795 could rectify alterations in Ca2+e-mediated Ca2+i responses in the mutant TRex-CaSR-Gln723 cells. As reported previously (26,28), the mutant Gln723 CaSR was associated with a gain-of-function in Ca2+i signaling, and NPSP795 treatment decreased the Ca2+e-mediated Ca2+i responses of the mutant cells (Fig. 1B).
We also investigated the effect of NPSP795 on pERK responses in mutant Gln723 CaSR-expressing cells following exposure to increasing Ca2+e concentrations. The maximal fold-changes in the pERK responses of the untreated Gln723 CaSR mutant were significantly increased compared with WT cells (24.9 ± 2.3 for Gln723 versus 20.7 ± 4.0 for WT; p < 0.05; Fig. 1C). Treatment of mutant Gln723 CaSR-expressing cells with 20nM NPSP795 decreased the maximal pERK fold-change response to 10.4 ± 1.7, which was not significantly different from WT (Fig. 1C). Thus, a 20nM dose of NPSP795 normalized the gain-of-function associated with the Leu723Gln CaSR mutation. In contrast, the addition of 40nM NPSP795 significantly reduced the mutant Gln723 CaSR pERK fold-change response to 6.9 ± 1.0 (p < 0.001) compared with WT (Fig. 1C).
Dose-dependent effects of NPSP795 on plasma PTH in Nuf mice
As NPSP795 rectified the altered signaling responses associated with the Nuf mouse CaSR mutation (Leu723Gln) in vitro, we pursued studies to determine the effects of this calcilytic in Nuf mice, which have hypocalcemia and reduced plasma PTH concentrations (Table 1). A dose-ranging study was undertaken with NPSP795 to establish the doses required to maximally increase PTH concentrations. NPSP795 was administered at 0, 1, 3, 10, and 30 mg/kg doses by s.c. bolus injection to WT and Casr+/Nuf mice, and at 0, 3, 10, and 30 mg/kg doses to homozygous (CasrNuf/Nuf) mice. A plasma sample was obtained at 30 min for PTH measurement. This time-point was selected as plasma PTH concentrations have been reported to be maximally increased at 15 to 30 min following calcilytic administration in rats (32). NPSP795 administration to WT mice led to dose-dependent increases in PTH concentrations, with the 10 and 30 mg/kg doses causing maximal elevations of PTH (Fig. 2A). NPSP795 administration also caused dose-dependent PTH elevations in Casr+/Nuf and CasrNuf/Nuf mice, although higher calcilytic doses were required to increase PTH in the mutant mice (Fig. 2B,C). Thus, Casr+/Nuf and CasrNuf/Nuf mice required a minimum of 10 and 30 mg/kg of NPSP795, respectively, to significantly increase plasma PTH, compared with a minimum of 3 mg/kg for WT mice (Fig. 2A-C). Casr+/Nuf and CasrNuf/Nuf mice treated with the highest (30 mg/kg) NPSP795 dose showed significantly reduced plasma PTH concentrations of 371 ± 30 ng/L and 114 ± 18 ng/L, respectively (p < 0.001), compared with a PTH concentration of 931 ± 26 ng/L for WT mice treated with the same dose. However, an analysis of fold-change PTH responses at the 30 mg/kg dose showed that the mutant mice have similar or increased PTH responses compared with WT mice (Fig. 2D). Thus, WT and CasrNuf/Nuf mice all showed ≥10-fold increases in plasma PTH compared with respective vehicle-treated mice, whereas Casr+/Nuf mice showed significantly higher (>15-fold) PTH responses (Fig. 2D).
Time-dependent effects of NPSP795 on plasma PTH, calcium, phosphate, urea, and creatinine in Nuf mice
To determine whether NPSP795 can rectify the hypocalcemia of Nuf mice (Table 1), a submaximal dose (25 mg/kg) was administered by s.c. bolus injection, and plasma concentrations of adjusted-calcium, phosphate, PTH, urea, and creatinine were measured at 0, 0.5, 1, 3, and 6 hours postdose in WT and Casr+/Nuf mice, and at 0, 0.5, and 3 hours in CasrNuf/Nuf mice (Fig. 3). Administration of 25 mg/kg NPSP795 led to a maximal rise in plasma PTH concentrations at 30 min postdose, which returned to baseline values by 3 hours postdose in WT and Nuf mice (Fig. 3A-C). The rise in PTH was associated with significant elevations of plasma calcium at between 1 and 3 hours postdose in WT, Casr+/Nuf, and CasrNuf/Nuf mice when compared with respective untreated mice (Fig. 3D-F). Thus, NPSP795 significantly increased plasma calcium in Casr+/Nuf mice from 1.87 ± 0.03 to 2.16 ± 0.06 mmol/L, and in CasrNuf/Nuf mice from 1.70 ± 0.03 to 1.89 ± 0.05 mmol/L (Fig. 3E,F). These increases in plasma calcium of between 0.20 and 0.30 mmol/L postdose were similar to that observed for WT mice treated with NPSP795 (Fig. 3D). Administration of this calcilytic also led to significant increases in plasma phosphate in WT and Casr+/Nuf mice (Fig. 3G-I).
Single-dose administration of NPSP795 was well-tolerated by the study mice. However, an increase in plasma urea concentrations was observed (Fig. 3J-L), which was associated with normal plasma creatinine concentrations in WT, Casr+/Nuf, and CasrNuf/Nuf mice treated with NPSP795 (Fig. 3M-O). The sources of increased plasma urea with normal plasma creatinine include dehydration, heart failure, gastrointestinal bleeding, a high-protein diet, and catabolic states caused by trauma, starvation, and the use of glucocorticoid drugs (33). Among these, the most likely cause was dehydration. Moreover, the rise in plasma urea was transient and had normalized by 6 hours postdose (Fig. 3J,K).
Discussion
These findings demonstrate that the amino-alcohol calcilytic, NPSP795, rectifies the gain-of-function associated with the Nuf mouse germline CaSR mutation, Leu723Gln (28), and increases plasma PTH and calcium concentrations in this ADH1 mouse model. We selected 20nM and 40nM NPSP795 concentrations for the cellular signaling studies as NPSP795 has a similar potency (IC50 = 73nM) to that of the NPS 2143 calcilytic compound (IC50 = 43nM) (16), and our previous studies involving NPS 2143 have demonstrated these concentrations to significantly increase the Ca2+i EC50 value of cells expressing the Nuf mutant CaSR (26). NPSP795 was shown to normalize the increases in Ca2+i and pERK-signaling responses of cells stably expressing the Nuf mutant CaSR, which is in keeping with the reported effects of NPSP795 on ADH1-causing germline gain-of-function CaSR mutations (23). Our in vivo studies showed that single-bolus administration of NPSP795 significantly increases plasma PTH concentrations in a dose-dependent manner in Nuf mice. However, substantially higher doses of NPSP795 were required to increase PTH secretion in Casr+/Nuf and CasrNuf/Nuf mice compared with that required for WT mice (Fig. 2A-C). This was particularly evident for CasrNuf/Nuf mice, which required 30 mg/kg NPSP795 to increase plasma PTH compared with 3 mg/kg of NPSP795 for WT mice. These findings suggest that parathyroid glands harboring the Gln723 mutant CaSR may have reduced sensitivity to NPSP795 compared with the parathyroid glands of WT mice. However, Casr+/Nuf and CasrNuf/Nuf mice given the highest (30 mg/kg) NPSP795 dose had similar or increased fold-change elevations in PTH responses compared with WT mice (Fig. 2D). These results are consistent with the biochemical features of ADH1 being rectifiable through normalization of the parathyroid set point for PTH release. Moreover, these findings differentiate ADH1 from hypoparathyroidism, which is generally associated with irreversible destruction of the parathyroid glands (34). Bolus dose administration of NPSP795 also significantly increased plasma calcium concentrations in Casr+/Nuf and CasrNuf/Nuf mice, and the 0.2 to 0.3 mmol/L increase in plasma calcium was similar to that observed in WT mice treated with NPSP795 (Fig. 3). This finding contrasts with a reported clinical trial involving patients with ADH1, which observed no alterations in ionized blood calcium concentrations of patients with ADH1 given 5 to 30 mg of NPSP795 by i.v. administration (23). However, our study used a markedly higher (25 mg/kg) dose of this calcilytic, and the difference in dosing between the patient and mouse studies likely explains the differences observed in circulating calcium responses. The plasma calcium concentrations of Casr+/Nuf mice treated with 25 mg/kg of NPSP795 remained significantly lower than those of untreated WT mice (2.16 ± 0.06 versus 2.50 ± 0.04 mmol/L; p < 0.01). However, we postulate that repetitive dosing with 25 mg/kg of NPSP795 will lead to normocalcemia in Casr+/Nuf mice, as this dose led to substantial (>15-fold) elevations of plasma PTH (Fig. 3B). Consistent with this, a study reporting treatment of ADH1 mice with the JTT-305/MK-5442 calcilytic demonstrated that administration of a single dose (20 µg/g body weight) caused marked increases in serum PTH but did not normalize serum calcium, whereas longer-term administration of this dose induced normocalcemia in ADH1 mice (24).
NPSP795 also increased plasma phosphate concentrations in WT and Casr+/Nuf mice, and such effects have previously been observed following administration of the NPS 2143 calcilytic compound to Nuf mice and to an ADH2 mouse model, which harbors a gain-of-function Gα11 mutation, Ile62Val (26,35). The cause of the increase in phosphate is unclear, as calcilytic treatment would be expected to lower plasma phosphate by inducing PTH-mediated renal phosphate excretion (8). In keeping with this, ADH1 mice harboring a CaSR mutation, Cys129Ser, showed a decrease in serum phosphate concentrations following treatment with the JTT-305/MK-5442 calcilytic compound (24). The hyperphosphatemia observed in the current study may potentially have arisen because of dehydration and decreased renal function caused by the acute rise in plasma calcium following NPSP795 treatment, which may activate the kidney CaSR, thereby leading to polyuria (36). In support of this, WT and Nuf mice had elevations of plasma urea, which accompanied the rise in plasma calcium following NPSP795 treatment (Fig. 3). Moreover, the physiological stress associated with drug administration and blood sampling may have reduced water intake in the mice, thus exacerbating the dehydration and the consequent increase in plasma urea. However, the increase in plasma urea concentrations appeared to be transient and had normalized in WT and Casr+/Nuf mice by 6 hours postdose (Fig. 3).
A limitation of this study is that, despite CasrNuf/Nuf mice being viable (28), fewer homozygotes were born than expected. Thus, the range of NPSP795 doses and study time points that could be evaluated in CasrNuf/Nuf mice was limited. However, the reduced numbers of CasrNuf/Nuf mice did not affect the main objective of this study, which was to evaluate NPSP795 in Casr+/Nuf mice, as this mouse genotype is a model for patients with ADH1 harboring germline heterozygous gain-of-function CaSR mutations. Furthermore, our study only evaluated the effect of NPSP795 on the in vitro and in vivo consequences of a single gain-of-function CaSR mutation. However, it is likely that this calcilytic will be of benefit for a range of ADH1-causing CaSR mutations. Consistent with this, NPSP795 has previously been shown to improve the gain-of-function caused by mutations located in the extracellular (Glu228Ala, Glu228Lys, and Gln245Arg) and transmembrane (Ala840Val) domains of the CaSR (23). In conclusion, single-dose administration of NPSP795 has been shown to cause dose-dependent increases in PTH and to ameliorate the hypocalcemia in an ADH1 mouse model. Thus, the NPSP795 calcilytic represents a potential targeted therapy for ADH1. Longer-term dosing studies are required to investigate whether NPSP795 can rectify the hypocalcemia caused by ADH1.
Disclosures
FMH and RVT have received grant funding from NPS/Shire Pharmaceuticals and GlaxoSmithKline for studies involving the use of calcilytic compounds. RVT has received grants from Novartis Pharma AG and the Marshall Smith Syndrome Foundation for unrelated studies.
"year": 2020,
"sha1": "525569bdb4abbd32a7daf60d2ca4fe13ba36cda6",
"oa_license": "CCBY",
"oa_url": "https://asbmr.onlinelibrary.wiley.com/doi/pdfdirect/10.1002/jbm4.10402",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8d550b3e3cb51265b0341a0f32a4b3ecb19e3b47",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
Controllable Synthesis of Graphene by Plasma-Enhanced Chemical Vapor Deposition and Its Related Applications
Graphene and its derivatives hold great promise for widespread applications such as field-effect transistors, photovoltaic devices, supercapacitors, and sensors due to their excellent properties as well as their atomically thin, transparent, and flexible structure. In order to realize practical applications, graphene needs to be synthesized in a low-cost, scalable, and controllable manner. Plasma-enhanced chemical vapor deposition (PECVD) is a low-temperature, controllable, and catalyst-free synthesis method suitable for graphene growth that has recently received increasing attention. This review summarizes recent advances in the PECVD growth of graphene on different substrates, discusses the growth mechanism, and surveys its related applications. Furthermore, the challenges and future development in this field are also discussed.
Introduction
Graphene, an atomically thin crystal with carbon atoms arranged in a honeycomb lattice, has attracted enormous interest due to its extremely high carrier mobility, ambipolar electric field effect, room-temperature quantum Hall effect, low optical absorption, high specific area and thermal stability. [1][2][3][4][5] Pristine graphene was originally obtained by mechanical exfoliation of graphite in 2004; [1] versatile methods have since been developed for the synthesis of graphene, such as oxidation of graphite, [6] liquid-phase exfoliation, [7,8] chemical vapor deposition (CVD) [9][10][11][12] and thermal decomposition of SiC. [13,14] The plasma generator is the core of a PECVD system and can be categorized into three types depending on the power source for plasma generation, i.e., microwave (MW) plasma (commonly 2.45 GHz), radio frequency (RF) plasma (commonly 13.56 MHz) and direct current (dc) plasma. MW plasma is driven by high-frequency electromagnetic radiation in the GHz range. To date, MW-PECVD has been used extensively in the synthesis of graphene and related materials such as CNTs, nanowalls and diamond films. RF plasma is another popular source, with its dominant frequency in the MHz range. The energy of an RF generator is coupled to the plasma in three main modes: the evanescent electromagnetic (H) mode, the propagating wave (W) mode and the electrostatic (E) mode. H-mode inductively coupled plasma (ICP) has the advantages of high energy density and a larger plasma volume, thus yielding high growth rates. In contrast, E-mode capacitively coupled plasma cannot be used as an independent plasma source due to its relatively low energy. Owing to its simple setup, dc glow plasma is also a widely used source. There are two geometric designs for dc glow plasmas, parallel-plate and pin-to-plate, which produce uniform and nonuniform plasma sources, respectively.
Gaseous species are essential for the synthesis of graphene and its derivatives and can be categorized into three functional groups. (i) Carbon-containing gaseous precursors provide carbon radicals for graphene growth via plasma-enhanced reactions. (ii) Gases such as H2 and O2 are added as amorphous carbon etchants to produce high-quality graphene and its derivatives. (iii) Gases such as N2 and NH3 are normally used to dope the as-grown graphene, which can tailor its electrical properties.
During the plasma-enhanced process, the source gas is activated by the energetic electrons generated in the plasma. Ionization, excitation and dissociation of the source gases all occur in the low-temperature plasma process. First, ionization proceeds via interactions between energetic electrons and gas molecules. Second, the high-energy ions generated in these ionization processes react with source gas molecules. Finally, various radicals form via dissociation reactions. These radicals are more reactive than ground-state atoms or molecules, enabling the formation of graphene and its derivatives on catalyzed or noncatalyzed surfaces at low temperature. To optimize the synthesis, the plasma-enhanced process needs to be understood theoretically and experimentally.
Simulations of the plasma process have been performed to optimize the growth parameters. A number of plasma models have been developed based on different gas systems. [20][21][22] In these models, tens of species (ions, electrons, neutrals and radicals) and reactions are considered. For methane or methane/hydrogen plasma, 8 neutrals, 11 ions and 5 radicals have been taken into account in a 1D fluid model, [23] which includes 27 electron reactions, 7 ion-neutral reactions and 12 neutral-neutral reactions. Based on such 1D fluid models, the densities of radicals and ions are found to vary with distance in the plasma. Extending to the radial direction, a 2D fluid model has been proposed. [24] According to the simulation results, the electron density reaches its maximum at the fringes of the electrodes, where the potential V changes dramatically, as shown in Figure 2a,b. Thus, electron-driven reactions occur more frequently in these regions, leading to high densities of energetic ions and neutrals, as shown in Figure 2c,d.
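To make this rate-equation picture concrete, the following is a minimal zero-dimensional (well-mixed) sketch of plasma chemistry in Python. It tracks a single electron-impact dissociation channel, e + CH4 → CH3 + H, with an assumed electron density, rate coefficient and wall-loss frequency; the published 1D/2D fluid models track dozens of species and reactions, so this is purely illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative 0D plasma chemistry sketch. All rate coefficients and
# densities are assumed order-of-magnitude values, NOT taken from the
# fluid models cited in the text.
N_E = 1e16       # electron density, m^-3 (assumed)
K_DISS = 1e-15   # e + CH4 -> CH3 + H + e rate coefficient, m^3 s^-1 (assumed)
K_LOSS = 1e2     # effective wall/pump loss frequency for radicals, s^-1 (assumed)

def rhs(t, y):
    n_ch4, n_ch3, n_h = y
    r_diss = K_DISS * N_E * n_ch4          # electron-impact dissociation rate
    return [
        -r_diss,                           # CH4 consumed
        r_diss - K_LOSS * n_ch3,           # CH3 produced, lost to walls
        r_diss - K_LOSS * n_h,             # H produced, lost to walls
    ]

y0 = [1e21, 0.0, 0.0]                      # initial CH4 density, m^-3 (assumed)
sol = solve_ivp(rhs, (0.0, 0.1), y0, method="LSODA", dense_output=True)

t = 0.05
n_ch4, n_ch3, n_h = sol.sol(t)
print(f"t = {t:.2f} s: CH4 = {n_ch4:.3e}, CH3 = {n_ch3:.3e}, H = {n_h:.3e} m^-3")
```

Even this toy system reproduces the qualitative behavior discussed below: radical densities settle into a quasi-steady balance between electron-impact production and wall losses, so they scale with electron density (i.e., plasma power) and feedstock density.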
The densities of ions and radicals are also influenced by the power, gas mixture ratio, gas flow and pressure. In the 1D fluid model of the methane/hydrogen system, all species except methane increase slightly at higher plasma power due to more frequent reactions among electrons, ions and radicals, as shown in Figure 3a. It should be mentioned that the effective power values (about 50% of the generator power) used in the model may differ from the actual power in the plasma; hence, good agreement between calculated and experimental results cannot be obtained at low plasma power. The gas mixture ratio is also a key factor in species distributions. Nonradical neutrals (H2, CH4, C2H6, C3H8, C2H4, and C2H2) increase linearly with increasing CH4, while radicals (H, CH2, CH3, and C2H5) remain almost constant at different CH4/H2 ratios. For the ions, H2+ and H3+ show a dramatic decrease with rising CH4 gas flow, while a slight increase is observed for CH5+ and C2H5+, as shown in Figure 3b. When the total gas flow is increased, less H2 and fewer related ions (H2+ and H3+) form in the plasma, but the other radicals and ions (CH5+, C2H5+, CH4+, CH3+) show no variation, as shown in Figure 3c. The pressure is also believed to influence the reactions among electrons, ions, radicals and neutrals. In the fluid model, nonradical neutrals increase slightly as a function of pressure, while radical concentrations do not change. For ions, C2H5+ increases while the other ions (CH5+, CH4+, CH3+) drop drastically, as shown in Figure 3d.
Besides the CH4 and CH4/H2 systems, models have also been developed for the Ar/CH4/H2, [25] Ar/C2H2/H2, [26] Ar/C2H2/NH3, [27] CH4/NH3, [28] and C2H2/NH3 [28,29] systems. For gas systems including Ar, Ar atoms become the dominant neutral species, [25,26] and the densities of CH4 and H2 decrease more dramatically with plasma power. Among nonradical and radical neutrals, higher hydrocarbons (CxHy) are more likely to break into radical fragments due to more intense electron-neutral reactions in Ar plasma, compared with the CH4 and CH4/H2 systems. In plasma models containing NH3, an additional 22 species, 43 electron-impact reactions, 48 ion-neutral reactions and 67 neutral-neutral reactions must be considered. For the Ar/C2H2/NH3 system, [27] numerous reactive species can be produced from C2H2 and NH3 dissociation reactions. It is speculated that hydrogen-related species generated from the NH3 dissociation reactions produce evident etching effects on amorphous carbon structures, which is important for the formation of pure CNT structures. Pure C2H2 plasma, or plasma containing more than 30% C2H2, results in amorphous-carbon-like films and obelisk-like nanotubes. [29]
The plasma process is also investigated experimentally by techniques such as laser-induced fluorescence spectroscopy (LIFS), [30] infrared laser absorption spectroscopy (ILAS), [31] mass spectrometry (MS), [25,32] and optical emission spectroscopy (OES). [25] ILAS can be used to measure the densities of reactive species with infrared absorption lines, [31] from which their concentrations can be determined. By ILAS, the densities of radicals (CH3, CH, and CH2) and neutrals (C2H4, C2H6) are found to change with plasma power. MS is a common method of monitoring species at the substrate, but it is difficult to assess unstable species; [32] hence, MS analysis is limited to low-order CxHy neutrals and radicals. OES provides information on the ions involved in plasma processes. [25] Together, all the dominant ions, radicals and neutrals can be studied by MS and OES, and the underlying reactions can be analyzed by measuring the radicals, ions and neutrals.
The plasma-enhanced process is a complicated one containing various kinds of species and reactions, which play important roles in the plasma-enhanced growth of graphene. For example, the density of carbon-related species can influence the morphology of the as-grown graphene. Additional species such as H2 and Ar can serve as amorphous carbon etchants, promoting high-quality graphene, while species such as NH3 and N2 can dope graphene with heteroatoms and adjust its electrical properties. All the simulation and experimental results above offer information about these species and reactions, contributing to an understanding of the growth kinetics in the plasma process, which is of great significance for realizing the controllable synthesis of graphene.
Figure 3. Calculated densities of nonradical neutrals, radicals, ions, and electrons as a function of a) radio frequency generator power, b) CH4 and H2 gas flow mixture, c) CH4 flow, and d) pressure. Reproduced with permission. [23] Copyright 2001, American Institute of Physics.
Plasma-Enhanced Growth of 2D Graphene
Graphene has been considered an attractive candidate for future electronic materials due to its excellent electrical properties and atomic thickness. Its 2D structure enables graphene to be adapted to current photolithography and integration processes. Among existing synthetic methods, CVD has been considered one of the most promising, as it can grow high-quality graphene films at relatively low cost. Large-area polycrystalline and millimeter-sized single-crystalline graphene have been synthesized and applied in electronics. [9][10][11][12][16] Compared with thermal CVD, PECVD has more potential in future electronic applications due to the advantages of low growth temperature and the possibility of transfer-free growth. To date, successful synthesis of high-quality graphene via PECVD on metal, dielectric and 2D substrates has been reported.
Plasma-Enhanced Growth of Graphene on Transition Metal Substrates
Thermal CVD growth of graphene on transition metal substrates usually requires a high temperature (800-1000 °C), which is still too high for industrial production. Therefore, low-temperature synthesis of graphene remains challenging for applications in electronics. PECVD can achieve low-temperature growth of high-quality graphene films on transition metal substrates such as Ni, [33][34][35] Cu, [36,37] Co, [35,38] and so on. Woo et al. [34] synthesized uniform graphitic films on Ni foils at a low growth temperature of 850 °C using remote RF-PECVD with pure ethylene as the carbon source. Raman spectra show a negligible D peak, indicating the high quality of the as-grown films, while different G-to-2D peak ratios at different positions suggest that the film thickness is inhomogeneous, ranging from monolayer to multilayer. Similar high-quality graphene films have been synthesized at lower temperatures of 650-700 °C with a gas mixture of methane and hydrogen. [39] The remote plasma configuration used in that work promotes the growth of planar graphene films, with the electric field parallel to the substrate rather than perpendicular to it, as in other plasma configurations. In addition, large-area surface wave plasma (SWP) has been used to produce large-area graphene at growth temperatures below 400 °C, owing to its higher plasma and radical densities. Using SWP-CVD, graphene-like films were first grown on Al foil, even though the melting point of Al is too low for it to serve as a substrate in conventional CVD. [40] However, graphene films fabricated by SWP-CVD on Cu and Al foils show a high-intensity D peak and a low-intensity D′ peak, which are attributed to abundant edges and boundary effects. High-quality monolayer graphene can be synthesized on transition metal surfaces by optimizing the growth parameters. Kim et al. [41] demonstrated low-temperature synthesis of monolayer graphene on polycrystalline Ni foils by MW-PECVD. In that work, monolayer graphene was obtained under various ratios of hydrogen and methane at growth temperatures from 450 to 750 °C, as shown in Figure 4a,b. The layer number of the as-grown graphene depends on the gas mixture ratio: when the hydrogen-to-methane ratio drops to 10:1, graphene with six layers rather than a monolayer grows on the Ni foils, as shown in Figure 4a. Although the growth temperature can be decreased thanks to the high energy provided by the MW plasma, more defects form during growth at low temperature. As shown in Figure 4b, a higher D peak is observed at a growth temperature of 450 °C, and an obvious D′ shoulder peak appears in the Raman spectrum, mainly attributed to the low degree of crystallinity at low growth temperature.
Typical dissolution-precipitation occurs on transition metals with high carbon solubility, such as Ni, during the plasma-enhanced growth process. The substrate temperature, metal film thickness and deposition time strongly influence the thickness and crystalline structure of the as-grown graphene. Peng et al. [33] investigated the growth of graphene on Ni films by RF-PECVD, in which the RF plasma enhances the decomposition of methane into carbon species under hydrogen-free conditions. The carbon species randomly dissolve in the interstices between Ni atoms and, during the cooling process, precipitate and arrange into the hexagonal ring structure of graphene, as illustrated in Figure 5a. The thresholds of growth temperature, film thickness and deposition time were investigated in turn. Below a threshold temperature of 475 °C, carbon species generated by the RF plasma are unable to dissolve into the nickel films, so no graphene grows, as shown in Figure 5b. The quantity of dissolved carbon species depends on the nickel thickness. When the nickel film is thinner than 10 nm, the dissolved carbon species are insufficient to form continuous graphene films on the nickel surface, and no carbon-related peaks are observed in the Raman spectra. When the nickel film thickness is increased above 10 nm, amorphous carbon structures without 2D peaks form rather than graphene. When the thickness is increased to over 30 nm, graphene layers with the characteristic D, G, and 2D peaks are obtained, as shown in Figure 5c. A short growth time below 10 s fails to accumulate enough carbon species in the Ni films to form graphene in the subsequent precipitation process. Figure 5d shows that a higher D peak and a suppressed 2D peak are observed with longer growth times, indicating that not only sp2 but also sp3 carbon forms during growth in the absence of hydrogen.
Figure 4. Raman spectra for graphene synthesized by microwave plasma-enhanced CVD a) at various methane/hydrogen ratios, b) at different growth temperatures. Reproduced with permission. [41] Copyright 2011, American Institute of Physics.
Dissolution-precipitation occurs not only at the surface but also underneath transition metal films. High-quality monolayer graphene with hexagonal domains has been synthesized along the interface between deposited Ni films and a SiO2/Si substrate by rapid-heating plasma CVD (RH-PCVD). [42] The growth process is illustrated in Figure 6a. During the plasma-enhanced process, high-energy carbon radicals generated in the plasma are accelerated toward the substrate and penetrate through the surface of the deposited Ni films. As a result, the density of dissolved carbon inside the nickel layer is higher than that at the surface, leading to selective growth at the interface between the Ni and the SiO2/Si substrate. At the initial growth stage, hexagonal graphene domains about 10-20 μm in size can be observed on the SiO2/Si substrate after etching the Ni layer, as shown in Figure 6b. Similarly, large-scale graphene films can be grown at the interface by adjusting the growth conditions. By using the Ni layer as a buffer layer, high-quality graphene can be synthesized on the SiO2/Si substrate with high controllability in size and shape. Moreover, the electrical properties of the graphene can be tuned by introducing NH3 plasma during the RH-PCVD growth process. A negative shift of the Dirac point is seen in the transfer curve of N-doped graphene grown under NH3 plasma, as shown in Figure 6c, and higher NH3 flow rates result in a more negative Dirac point.
Figure 5. a) Carbon atoms dissolve in the interstices of the nickel lattice and arrange into the hexagonal ring structure on the nickel surface during precipitation. Raman spectra for graphene synthesized by radio frequency plasma-enhanced CVD b) at different growth temperatures, c) on Ni films of different thickness, d) at different deposition times. Reproduced with permission. [33] Copyright 2013, Royal Society of Chemistry.
Figure 6. a) Schematic illustration of the direct growth of graphene on SiO2/Si with an ultrathin Ni film as the buffer layer: a nickel film is deposited on a SiO2/Si substrate; CxHy ions or radicals are accelerated toward the nickel surface and diffuse into regions near the interface; carbon atoms preferentially precipitate at the interface to form hexagonal graphene; graphene remains on the SiO2/Si substrate after chemically etching the nickel film. b) Optical and Raman maps of hexagonal graphene domains on SiO2/Si grown by RH-PCVD. c) Transfer curves of graphene grown by RH-PCVD at 950 °C with NH3 flow rates of 0, 5, and 15 sccm, respectively; an obvious N doping can be observed from left to right. Reproduced with permission. [42] Copyright 2012, American Chemical Society.
For metal substrates with low carbon solubility, surface-catalyzed dissociation dominates the graphene growth process. Polycrystalline Cu foil has been used as an excellent substrate for large-area growth of high-quality monolayer graphene due to its catalytic nature. [9] Instead of dissolving into and precipitating from the metal, the hydrocarbon precursors directly dissociate on the Cu surface and then assemble into the graphene structure. In a PECVD system, the dissociation on the Cu surface is further enhanced by the plasma source, so both the carbon radicals generated by surface catalysis and those generated by plasma excitation contribute to graphene growth. In conventional CVD, the growth of successive graphene layers slows dramatically once the first layer forms, owing to the loss of the catalytic Cu surface. In PECVD, however, the reactive carbon radicals from plasma-enhanced dissociation still contribute to the formation of successive layers at a relatively high rate. [37] For metal substrates with both high carbon solubility and a catalytic surface, dissolution-precipitation and catalyzed dissociation occur together during growth. For example, Co is a transition metal with a high carbon solubility of 4.1 at% and catalytic activity for the dissociation of hydrocarbon precursors. It has been reported that few-layer graphene can be grown on polycrystalline Co films at 800 °C by RF-PECVD, [38] with the first layer and the successive layers growing by different mechanisms. [35] The effect of deposition time on the degree of graphitization and the in-plane crystallite size has been investigated. Raman spectra show that graphene films deposited for different times (15, 40, 90, and 360 s, and 20 min) all exhibit the characteristic D, G and 2D bands. For short deposition times (15-40 s), the first graphene layer forms by the enlargement of small graphene domains, and the in-plane crystallite size increases during growth. When the growth time is increased to 90-360 s, the Co film is already covered with the first graphene layer, and CHx radicals can no longer be dissociated into C2 by Co-catalyzed dissociation, which is necessary for the 2D extension of graphene domains. CHx radicals generated in the plasma can terminate the growth of graphene domains and form new nucleation sites on the Co film, leading to more edge and boundary structures. Therefore, the ID/IG ratio in the Raman spectra initially decreases and then increases over the course of growth.
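Because the ID/IG ratio is used throughout this review as a proxy for in-plane crystallite size, a brief numerical illustration may help. The sketch below applies the widely used empirical relation of Cançado et al., La (nm) = (2.4 × 10^-10) · λ^4 · (ID/IG)^-1, with λ the excitation wavelength in nm; note that this relation is not invoked by the cited studies themselves, and the intensity values below are invented for illustration.

```python
# Estimate in-plane crystallite size L_a from the Raman I_D/I_G ratio
# using the general empirical relation of Cancado et al.:
#   L_a [nm] = (2.4e-10) * lambda^4 * (I_D / I_G)^-1,  lambda in nm.
# The intensity values below are invented for illustration only.

def crystallite_size_nm(i_d: float, i_g: float, wavelength_nm: float = 514.0) -> float:
    """Empirical in-plane crystallite size from the D/G intensity ratio."""
    return 2.4e-10 * wavelength_nm**4 / (i_d / i_g)

for i_d, i_g in [(0.2, 1.0), (0.8, 1.0), (1.5, 1.0)]:
    la = crystallite_size_nm(i_d, i_g)
    print(f"I_D/I_G = {i_d / i_g:.2f}  ->  L_a ~ {la:.1f} nm")
```

The inverse dependence makes the trend described above intuitive: a rising D peak during prolonged growth corresponds directly to shrinking crystallites and a growing density of edges and boundaries.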
Plasma-Enhanced Growth of Graphene on Dielectric Substrates
Large-area, high-quality graphene has been grown on transition metal substrates such as Ni or Cu foils by CVD. However, metal-catalyzed graphene requires post-growth transfer and catalyst removal for applications in electronics, and chemical contamination and structural defects (wrinkles, or even breakage) cannot be avoided during the metal etching process. It is widely believed that direct growth of graphene films on dielectric substrates would promote their widespread application in future electronics. A pioneering study reported the synthesis of graphene on dielectric substrates by pre-depositing a sub-micrometer-thick Cu film as a buffer layer, which is then removed by evaporation in a high-temperature, low-pressure atmosphere; however, the evaporation of the metal film still induces contamination, wrinkles and breakage in the graphene. [43] Semiconductors and metal oxides have been reported as catalysts for the growth of CNTs, [44][45][46] indicating their potential as substrates for graphene growth, and several reports have shown that graphene-like films can be synthesized on Si, SiN, Al2O3, SiO2, and MgO. [47][48][49] However, graphene-like films grown by these methods normally suffer from poor crystalline quality and coexisting amorphous carbon, and higher growth temperatures (1100-1650 °C) are necessary to improve their quality. Chen et al. reported the direct growth of high-quality polycrystalline graphene films on SiO2 and SiN substrates by thermal CVD. [18,19] The films grown by this method exhibit excellent field-effect mobilities of 500 to 1000 cm2 V−1 s−1, but the high growth temperature is not compatible with existing semiconductor technologies.
Plasma-enhanced CVD has attracted much interest as an important route to low-temperature growth of graphene on dielectrics. Pioneering research in this field demonstrated the growth of nanographene films on various substrates such as SiO2, atomic-layer-deposited (ALD) Al2O3, sapphire, quartz, mica and Si at a relatively low temperature of ≈550 °C. [50,51] The growth process of nanographene films on SiO2 substrates has been studied using atomic force microscopy (AFM), as shown in Figure 7a-c. In the early growth stage, nanographene islands nucleate uniformly on the SiO2 substrate with a thickness of 1.2-1.5 nm, corresponding to 2-3 graphene layers. As growth continues, a higher density of nanographene islands forms and coalesces into a continuous and uniform nanographene film consisting of densely packed nanoislands. Raman spectra show the characteristic D, G and 2D peaks of a graphitic structure, and the XPS C 1s spectrum is well fitted with a dominant peak assigned to sp2 carbon, as shown in Figure 7d,e. All these results indicate highly crystalline nanographene films without amorphous carbon or diamond; the D peak is mainly due to the small crystallite size and abundant edges of the nanographene. Nanographene films with various transmittances (85%-92%) and sheet resistances (40-7 kΩ sq−1) are obtained with different growth times, indicating that the optical and electrical properties can be adjusted by growth time, as shown in Figure 7f. It should be noted that the overall film resistance is mainly due to the resistance between nanographene clusters, and a negative temperature dependence of the resistance is observed due to the thermal generation of charge carriers.
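The transmittance/sheet-resistance trade-off quoted above can be condensed into the standard figure of merit for transparent conductors, σdc/σopt = Z0 / [2·Rs·(T^-1/2 − 1)], with Z0 = 377 Ω. The sketch below evaluates it at the endpoints of the reported ranges; note that the pairing of T with Rs (a thicker film giving both lower transmittance and lower resistance) is our assumption about how the two ranges correspond.

```python
# Figure of merit for a transparent conductor (thin-film limit):
#   T = (1 + Z0/(2*Rs) * sigma_opt/sigma_dc)^-2
#   =>  sigma_dc/sigma_opt = Z0 / (2 * Rs * (T**-0.5 - 1))
# The pairing of T with Rs below (thicker film = lower T, lower Rs)
# is our assumption about how the reported ranges correspond.
Z0 = 377.0  # impedance of free space, ohms

def fom(transmittance: float, sheet_resistance_ohm: float) -> float:
    return Z0 / (2.0 * sheet_resistance_ohm * (transmittance**-0.5 - 1.0))

for t, rs in [(0.92, 40_000.0), (0.85, 7_000.0)]:
    print(f"T = {t:.0%}, Rs = {rs/1000:.0f} kOhm/sq -> sigma_dc/sigma_opt = {fom(t, rs):.3f}")
```

Both endpoints give figures of merit far below the value of roughly 35 often quoted as an ITO-replacement target, which quantifies the point made later in this review that the sheet resistance of nanographene films must drop substantially before transparent-electrode applications become practical.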
With the advantage of direct growth on SiO2/Si substrates, two-terminal nanographene field-effect devices can be fabricated by electron-beam lithography and lift-off techniques without any post-transfer process. The nanographene-based devices exhibit ambipolar field-effect behavior with a mobility of 15 cm2 V−1 s−1, far lower than that of graphene films grown by thermal CVD, owing to the small crystallite size and numerous edges. Moreover, the gate modulation of nanographene-based devices is very weak, with an on/off ratio close to 2. Similar nanographene films can be achieved on other substrates such as ALD Al2O3, sapphire, quartz, mica, Si and SiC.
Plasma-enhanced CVD thus enables the growth of nanographene films on arbitrary dielectric substrates; however, their applications in electronics are limited by weak gate modulation and low field-effect mobility. To enhance the electrical properties of the as-grown graphene, Wei et al. developed critical crystal growth of graphene by introducing H2 plasma into PECVD. [52] In this process, catalyst-free crystal growth of graphene is observed on substrates such as sapphire, HOPG and SiO2/Si at growth temperatures as low as 450 °C. Figure 8a illustrates the PECVD growth process, including seed preparation, seed activation, crystal growth and further growth into a continuous film. Exfoliated graphene flakes, nucleated graphene islands and patterned graphene can all be used as growth seeds, and H2 plasma is introduced to activate the edges for critical crystal growth. Edge growth, edge etching and nucleation are all observed during the plasma-enhanced growth process, as shown in Figure 8c-e. Edge growth is observed at the edges of three-layer exfoliated graphene under critical conditions: after 60 min of plasma-enhanced growth, the edges of the three layers advance by 79, 117 and 158 nm, respectively, as shown in Figure 8c. Edge etching occurs at lower temperature or higher H2 content, with the width decreasing from 349 nm to 181 nm, as shown in Figure 8d. The opposite reaction conditions result in the nucleation of graphitic clusters with the thickness of monolayer graphene, as shown in Figure 8e. The critical conditions for growth, etching and nucleation have been systematically investigated, as shown in Figure 8b. Besides exfoliated graphene, high-quality hexagonal graphene crystals (HGCs) can be grown directly on HOPG or SiO2/Si substrates using graphitic clusters as growth seeds, as shown in Figure 8g,h: small graphitic clusters are first nucleated at a higher temperature (650 °C) and then grown into HGCs at the critical growth temperature (600 °C). The calculated field-effect mobilities of the as-grown HGCs are in the range of 550-1600 cm2 V−1 s−1, similar to metal-catalyzed CVD graphene and exfoliated graphene.
Although better electrical properties can be obtained with HGCs grown by critical PECVD, this method still suffers from a low growth rate and a high nucleation density. For example, the growth rate of critical crystal growth by PECVD is normally below 10 nm min−1 (1 nm min−1 at 250 mTorr, 4.5 nm min−1 at 48 mTorr), far lower than that of metal-catalyzed CVD. Moreover, the nucleation density in PECVD can reach ≈10^7 nuclei cm−1, about six orders of magnitude higher than in CVD with reduced nucleation density (≈4 nuclei cm−1). [16] Large-domain single-crystalline graphene has not yet been achieved by critical PECVD.
Figure 7. d) Raman spectra of the samples in (a), (b), and (c), all featuring isolated D, G, and 2D peaks; e) XPS spectra of the as-grown nanographene, with the C 1s spectrum mainly fitted with sp2 carbon; f) transmittance and corresponding resistance at different growth times. Reproduced with permission. [50] Copyright 2011, Springer.
Plasma-Enhanced Growth of Graphene on 2D Substrates
Although SiO2/Si has been widely used as the common substrate for fabricating graphene devices, the electrical transport properties of graphene on it are largely limited by surface roughness, impurities, charged surface states and the large lattice mismatch of amorphous SiO2. 2D materials have been found to be better substrates due to their electrical properties and ultra-flat surfaces. Graphene-based devices exhibit high carrier mobility when using h-BN as the substrate [53] or when encapsulated by molybdenum disulfide, tungsten disulfide or hexagonal boron nitride (h-BN). [54] As the insulating analogue of graphene, h-BN has been explored as one of the best dielectric substrates for graphene-based electronics due to its atomically smooth surface, [55] self-cleaning behavior and small lattice mismatch (1.7%) with graphite. [56] The dielectric constant (ε ≈ 3-4) and breakdown field (≈0.7 V nm−1) of h-BN are comparable to those of SiO2, making it an excellent gate dielectric. Graphene devices have been fabricated on single-crystal h-BN substrates via poly(methyl methacrylate) (PMMA)-based transfer techniques, [53] and their field-effect mobilities are almost an order of magnitude higher than those of devices on SiO2/Si substrates. Moreover, h-BN can behave as an ideal tunnel barrier due to its large band gap (5.2-5.4 eV) and atomically thin, graphene-like structure. An alternative device, in which h-BN is sandwiched between two graphene electrodes, shows a high on/off ratio of 10^6, providing an effective solution to the low on/off ratio of graphene-based devices. [57] However, graphene/h-BN devices fabricated by mechanical transfer normally suffer from unstable electrical performance due to chemical contamination, structural defects and uncertain alignment between the graphene and the h-BN. Instead, graphene/h-BN heterostructures can be fabricated by CVD techniques, [58][59][60][61][62][63] where the precise alignment between the grown graphene and the h-BN provides more favorable device characteristics; for example, theoretical calculations predict that AB-stacked graphene/h-BN could open a band gap of 53 meV in graphene. [56] Recently, large single-crystalline graphene domains up to 20 μm have been synthesized on h-BN with the gaseous catalyst silane. [63] The Hall mobility can reach 20 000 cm2 V−1 s−1, and a secondary Dirac cone is observed due to the moiré pattern.
Direct growth of graphene on h-BN by thermal CVD normally suffers from high growth temperatures (above 1200 °C) and low growth rates. PECVD has therefore been used to achieve low-temperature (≈550 °C) epitaxial growth of graphene on h-BN. [64] The growth process is illustrated in Figure 9a: methane is dissociated into various reactive radicals for nucleation and edge growth of graphene. Both monolayer and bilayer graphene have been grown on h-BN with different growth durations. As shown in Figure 9b, the Raman spectra of mono- and bilayer graphene grown on h-BN feature the characteristic peaks of graphene and h-BN. A split G peak is observed in the Raman spectrum of monolayer graphene, indicating zigzag edges of the graphene domains, and the 2D peak of the bilayer graphene is fitted with four Lorentzian curves, indicating Bernal (AB) stacking. The epitaxial growth of graphene on h-BN proceeds similarly to the growth of nanographene films. As shown in Figure 9c, the nucleated grains have a height of about 0.39 nm, indicating the formation of monolayer graphene domains. These small grains enlarge and coalesce over longer growth times, as shown in Figure 9d, and further growth results in the nucleation of second-layer graphene with a height of 0.77 nm, as shown in Figure 9e. All nucleated hexagonal graphene domains have the same orientation due to van der Waals epitaxial growth on h-BN, so they can coalesce into a continuous single-crystalline graphene domain with sufficient growth time. Thus, higher carrier mobility can be expected in epitaxially grown graphene films on h-BN, free from grain-boundary effects. Due to the lattice mismatch between graphene and h-BN, a trigonal moiré pattern can be seen in Figure 9f, with all moiré patterns aligned with the zigzag edges. The trigonal moiré pattern can lead to a secondary Dirac cone in the as-grown graphene, as shown in Figure 9g,h.
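The moiré pattern mentioned above follows directly from the graphene/h-BN lattice mismatch. As a brief check, the sketch below evaluates the standard moiré-period expression λ = (1+δ)·a / sqrt(2(1+δ)(1−cos θ) + δ²), using the graphene lattice constant a ≈ 0.246 nm and the 1.7% mismatch quoted in the text; at perfect alignment (θ = 0) this gives the familiar ≈14 nm superlattice. The twist angles sampled are arbitrary illustrative values.

```python
import math

# Moire superlattice period for graphene on h-BN:
#   lam = (1 + delta) * a / sqrt(2 * (1 + delta) * (1 - cos(theta)) + delta**2)
# a: graphene lattice constant, delta: lattice mismatch, theta: twist angle.
A_GRAPHENE = 0.246   # nm
DELTA = 0.017        # ~1.7% graphene/h-BN mismatch (quoted in the text)

def moire_period_nm(theta_deg: float) -> float:
    th = math.radians(theta_deg)
    return (1 + DELTA) * A_GRAPHENE / math.sqrt(
        2 * (1 + DELTA) * (1 - math.cos(th)) + DELTA**2
    )

for theta in [0.0, 0.5, 1.0, 2.0]:
    print(f"twist = {theta:4.1f} deg -> moire period ~ {moire_period_nm(theta):5.2f} nm")
```

The rapid shrinkage of the period with twist angle explains why the aligned, van der Waals epitaxial growth described above is important: only near-zero twist yields the large-period superlattice responsible for the secondary Dirac cone.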
Plasma-Enhanced Growth of VG Nanosheets with Controllable Morphology
Vertical graphene (VG) nanosheets can grow into a variety of morphologies, such as petal-, turnstile-, maze- and cauliflower-like, under different growth parameters. For example, different morphologies have been obtained by changing the feedstock gas. [84] Shiji et al. synthesized VG nanosheets with different morphologies by RF-PECVD using CH4/H2, CF4/H2, CHF3/H2, and C2F6/H2 as the precursors. The VG nanosheets grew thin and wavy in the CH4/H2 system but maze-like in the fluorocarbon/H2 systems, and their inter-layer spacing increased in the order CH4/H2, CF4/H2, CHF3/H2, C2F6/H2. Teii et al. [85] also obtained VG nanosheets with two kinds of morphology (pure VG and VG intercepted by diamond) by MW-PECVD using C2H2/N2/Ar and CH4/N2/Ar as the precursors, respectively. The different morphologies were attributed to differences in the carbon dimer density: the low carbon dimer density of the CH4-based system contributed to the formation of sp3-C or amorphous carbon (a-C), while the high carbon dimer density promoted the growth of pure VG networks. VG nanosheets with more ordered orientation and more uniform sheet height could be obtained in the C2H2/H2 system compared with CH4/H2 precursors. [93] The growth regimes for microdiamond (MD), nanodiamond (ND), carbon nanowalls (CNWs) and ND/CNW composites were mapped in Figure 10e, which shows that higher growth temperatures and abundant carbon dimers favor VG nanosheet structures with less ND or microdiamond. The morphologies of VG nanosheets grown under different conditions are also illustrated in Figure 10a-d. However, the morphology of VG nanosheets may differ under similar conditions in other plasma-enhanced systems.
Plasma-Enhanced Growth of VG Nanosheets with Controllable Density
The density of VG nanosheets can vary depending on plasma power, gas composition, temperature and substrate. Yang et al. [92] found that the density of VG nanosheets is strongly dependent on the plasma power during RF-CVD growth, with denser VG nanosheets obtained at higher plasma power. The results indicate that the electric field aligned perpendicular to the surface, provided by the plasma, plays the more important role in the formation of nucleation centers for vertical growth. Wang et al. [94] reported controllable VG growth using CH4 diluted in H2 as the gas precursor. The density of VG nanosheets was found to be strongly dependent on CH4 concentration and growth temperature: higher nucleation density and smaller lateral size were obtained with higher CH4 concentration (10-100%) and growth temperature (630-830 °C), as shown in Figure 11d,e and Figure 11f,g, respectively. However, growth temperatures above 830 °C resulted in a high degree of corrugation. It is believed that the nucleation of VG nanosheets initiates at the boundaries of the buffer layer, a flat film of nanocrystalline graphene formed in the first growth stage. Different substrates induce different buffer layers: Davami et al. [90] produced different densities of VG nanosheets on Si, Au/Si, Ni/Si and Cu substrates, and although the VG nanosheets on these substrates have similar leaf-like morphologies, those grown on the Si substrate are denser and thinner than those on Si/Ni and Si/Au substrates.
Plasma-Enhanced Growth of VG Nanosheets with Controllable Microstructure
All kinds of carbon structures, such as amorphous carbon, VG nanosheets, CNTs and diamond, can be produced by PECVD. The effective removal of a-C has been widely recognized as a crucial and unavoidable step toward the formation of high-quality nuclei and the subsequent vertical growth of VG nanosheets. [67,79] Several atoms and radicals play key roles in VG nanosheet growth. Zhu et al. [87] reported the synthesis of VG nanosheets with CH4/H2 in an ICP system and suggested that hydrogen atoms act as an effective etchant to remove the amorphous carbon. Shang et al. [95] used plasma-excited nitrogen species to remove the amorphous carbon in a TM-MW system. Oxygen atoms and hydroxyl radicals have also been reported to etch the amorphous carbon during the PECVD process, showing stronger etching ability than hydrogen radicals. [96] The addition of Ar provides a high electron density and benefits the formation of radicals for VG growth: Goyett et al. [97] found that adding Ar promoted the formation of C2 and H atoms, which benefit the growth of VG nanosheets, and based on OES measurements in a TM-MW system, Teii et al. [85] proposed that C2 forms via direct dissociation reactions in C2H2/H2/Ar mixtures.
VG nanosheets usually cannot be obtained with pure methane as the precursor because of the absence of etchant radicals. However, with a high-intensity plasma source, VG nanosheets can be synthesized from pure methane, [94,98] because such high-energy plasma systems provide higher densities of H atoms and radicals, which act as effective amorphous carbon etchants.
Nucleation and Coalescence Mechanism
In the thermal CVD process, graphene can be grown on the surface of transition metals by surface catalytic decomposition. Metal-catalyzed growth of graphene involves surface processes including the dissociation of hydrocarbon molecules, the formation of C clusters, surface diffusion and the extension of graphene nuclei, [99] and the attachment of C clusters generated by surface catalytic decomposition to graphene nuclei is believed to be crucial for the growth of high-quality graphene films in metal-catalyzed CVD. In plasma-enhanced growth, however, the nucleation process is greatly enhanced by the more reactive radicals generated in plasma-enhanced dissociation reactions. Nucleation and coalescence therefore become more important for plasma-enhanced growth of graphene, especially at low temperature or without metal catalysts.
Saiki et al. reported the growth of graphene by PECVD on catalytic metal surfaces and discussed the related mechanisms in detail. [37] It has been widely reported that the Cu surface has a catalytic effect that dissociates hydrocarbons into activated carbon species; however, this catalyzed dissociation cannot occur at temperatures below 600 °C. At low growth temperature, only the activated carbon radicals generated by the plasma contribute to graphene growth on the Cu surface, and the growth process is dominated by the nucleation and coalescence of graphene patches ≈20 nm in size, as shown in Figure 12a. When the growth temperature is increased to 900 °C, the catalytic effect of Cu comes into play: carbon radicals are generated by both plasma-enhanced and metal-catalyzed dissociation, and the activated carbon radicals from catalytic dissociation enlarge the nucleated graphene patches to ≈40 nm, as shown in Figure 12b. After the first graphene layer grows on the Cu, activated carbon radicals can no longer be generated by metal-catalyzed dissociation, so small graphene patches, rather than enlarged graphene domains, nucleate and coalesce into the second layer, as shown in Figure 12c,d. Successive graphene layers can likewise grow by nucleation and coalescence, with a smaller crystallite size of ≈10 nm. On noncatalytic substrates, graphene growth can be fully dominated by the nucleation and coalescence mechanism: as reported by Zhang et al., catalyst-free growth of nanographene films can be achieved on noncatalytic SiO2/Si, Al2O3, mica, silica and even glass at low growth temperature. [50,51] Nanographene islands with different sizes and heights first form during growth; with longer growth duration, more nanographene islands nucleate and coalesce into continuous films.
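As a toy illustration of how a continuous film can emerge from this nucleation-and-coalescence picture, the sketch below evaluates Johnson-Mehl-Avrami-Kolmogorov (JMAK) coverage kinetics, X(t) = 1 − exp(−K·t^n). The closed form, the exponent n = 3 (continuous nucleation with 2D island growth) and the lumped rate constant are our illustrative assumptions and are not a model used in the cited studies.

```python
import math

# JMAK (Avrami) coverage for simultaneous nucleation and 2D island growth:
#   X(t) = 1 - exp(-K * t**n)
# With continuous nucleation and 2D growth, n ~ 3; K lumps the nucleation
# rate and the edge-growth velocity. Both values here are assumed.
K = 1e-6   # s^-3, assumed lumped rate constant
N = 3.0    # Avrami exponent for continuous nucleation + 2D growth

def coverage(t_s: float) -> float:
    return 1.0 - math.exp(-K * t_s**N)

for t in [30, 60, 120, 240]:
    print(f"t = {t:4d} s -> film coverage ~ {coverage(t):.2%}")
```

The characteristic sigmoidal rise, slow at first, then rapid as islands impinge and coalesce, mirrors the AFM observations described above, where isolated islands eventually merge into continuous films.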
Etching and Growth Mechanism
In the nucleation and coalescence mechanism, nanographene with domain sizes ranging from a few to tens of nanometers nucleates on the substrate. To obtain high-quality graphene films, the nanographene should be enlarged by edge growth rather than by further nucleation. Liu et al. described the competition between etching and growth in the catalyst-free growth of graphene by PECVD. [100] In this mechanism, the etching and growth processes are found to depend on the growth temperature with opposite trends. Accordingly, a two-step strategy is proposed in which nucleation and edge growth occur in two separate stages, as shown in Figure 13a. Nucleation at lower temperature followed by edge growth results in larger graphene domains than nucleation at higher temperature. It is expected that continuous polycrystalline graphene films with larger domains can be obtained via edge growth following a low-density nucleation process, as shown in Figure 13b-d.
A similar competition between etching and edge growth is also reported by Wei et al. in the critical crystal growth of graphene. [52] H2 plasma is known to etch graphene from the edges. [101] After moderate H2 plasma treatment, macro-structural defects are removed and atomically smooth hexagonal configurations form at the edges, as shown in Figure 14a,b. In agreement with previous first-principles calculations, the final edges show a preferential zigzag orientation after the H2 plasma treatment. [102] The zigzag-oriented smooth edges produced by moderate H2 plasma can serve as active sites for the crystal growth of graphene in the CH4/H2 plasma. After growth, the zigzag configurations transform into armchair configurations, in agreement with previous theoretical studies, [102,103] as shown in Figure 14c. To reveal the critical crystal growth mechanism at the atomic scale, scanning tunneling microscopy (STM) studies were performed to characterize the edge structures. Several typical edge configurations, including zigzag (Z1), armchair (AC11, AC22) and zigzag-armchair (Z-AC), can be identified by comparison with simulated STM patterns, as shown in Figure 14d. Moreover, pentagon-hexagon armchair edges (AC5-6) can occasionally be observed on newly grown edges, because π electrons on pentagon edge atoms can be distinguished from those on hexagon edge atoms in STM images. It is believed that AC5-6 is a transition state in the growth process, as shown in Figure 14e. The competition between H2 plasma etching and CH4 plasma growth in critical PECVD is illustrated in Figure 14f.
Figure 12. Schematic of the growth mechanism on a Cu substrate by PECVD. a) Monolayer growth at low growth temperature (500 °C). b) Monolayer growth with larger grain size at high growth temperature (900 °C). c) Second-layer growth on the first layer. d) Successive-layer growth. Reproduced with permission. [37] Copyright 2012, Elsevier.
Figure 13. a) Two-step growth strategies for nanographene growth comprising isolated nucleation and edge-growth stages. AFM images of b) nucleation at 560 °C, c) edge growth at 536 °C for 2 h, d) further edge growth at 510 °C for 2 h. Reproduced with permission. [100] Copyright 2014, Elsevier.
Vertical Growth Mechanism
Wu et al. first reported the successful synthesis of CNWs on a sapphire substrate. [104] It is believed that a change in the direction of the electric field contributes to the formation of CNWs rather than CNTs; it has also been reported that horizontally aligned CNTs can be grown on modified SiO2/Si, which directs the electric field from the plasma to the substrate surface. [105] Therefore, the direction of the electric field is essential for the growth of VG nanosheets. However, the growth mechanism of VG nanosheets remains ambiguous. Jiang et al. [106] reported that wafer-sized, uniform vertically standing graphene (VSG) films can be grown on Cu foil using an MP-CVD system. To reveal the growth mechanism, the evolution of the VSG films was monitored by varying the growth time. A multilayer graphene film with wrinkles or ripples is observed at a growth time of 1 min, as shown in Figure 15a,b; as the growth time increases to 2 min, VG nanosheets appear, as shown in Figure 15c. This suggests that VG growth takes place after 2D growth. Figure 15d shows the proposed mechanism of VSG film growth: first, the hydrocarbon is decomposed and adsorbed on the Cu surface, leading to the growth of a 2D multilayer graphene film; then the layer growth turns into vertical growth owing to strain and defects accumulated in the as-deposited film. Moreover, the MW plasma sustains the 3D growth of the vertical graphene nanosheets, because reactive carbon radicals generated in the MW plasma frequently reach the edges and diffuse outward. Although VG nanosheets have been widely grown by PECVD on various substrates, their vertical growth mechanism needs further investigation: the role of the electric field in setting the growth direction and the transition from 2D to vertical growth should be examined in detail.
Application in Photovoltaic Devices
Graphene has potential applications in future photovoltaic devices due to its high transparency (98%) and extremely low sheet resistance (≈25 Ω sq−1). To date, graphene films and graphene-based composites have been exploited as transparent flexible electrodes in dye-sensitized solar cells [107][108][109] and organic photovoltaic devices. [110][111][112] The graphene materials are normally transferred to the desired substrates after CVD growth on the surface of transition metals, and the contamination, wrinkles and breakage that cannot be avoided during the mechanical transfer process limit their application as transparent electrodes in photovoltaic devices. It has been reported that nanographene films can be synthesized on arbitrary substrates at low temperature by PECVD, with high transmittance (85%-92%) and sheet resistance of 40-7 kΩ sq−1. [50,51] Further investigation of their application as transparent electrodes is required. Moreover, the sheet resistance of nanographene films remains higher than that of CVD-grown graphene on transition metals, due to abundant edges and small crystallite size; reducing the nucleation density through rational synthetic strategies is required to achieve low sheet resistance.
Figure 15. a) SEM image of planar multilayer graphene films grown on copper foil. b) Magnified SEM image of panel (a). c) SEM image of VSG films with growth time increased to 2 min. d) Schematic illustration of the growth process of VSG films on copper foil in an MP-CVD system. Reproduced with permission. [106]
Carbon-based heterojunction solar cells have been reported in many previous studies. [113][114][115][116][117][118] Various heterostructures (Schottky and p-n junctions) have been successfully fabricated based on amorphous carbon/n-Si, [113] CNTs/n-Si [114,115] and graphene/n-Si. [116][117][118] The graphene-based Schottky junction is more favorable due to its large built-in potential (0.55-0.75 V) and high charge separation efficiency. However, many challenges remain in improving the power conversion efficiency (PCE) of graphene-based Schottky junction solar cells; using high-quality graphene and rational device fabrication can improve device performance. A graphene/Si Schottky junction solar cell has been fabricated by directly growing graphene-graphitic films on a Si substrate by PECVD, [118] but its PCE was found to be only 0.078% due to the poor quality of the graphene-graphitic films. Introducing an effective etchant can remove defective structures from the as-grown films, which may help improve the performance of graphene-based Schottky junction solar cells.
Graphene nanowalls (GNWs) are networks of graphene sheets and can also be used in carbon-based solar cells. [119][120][121][122][123] PECVD-grown and plasma-post-treated GNWs have been used as counter electrodes in dye-sensitized solar cells (DSSCs). [120] DSSCs with as-deposited and H2-plasma-treated GNWs showed PCEs of 1.64% and 2.23%, respectively, the improvement being due to the reduced sheet resistance after H2 plasma treatment. GNWs/Si heterojunction solar cells have device structures similar to graphene-based Schottky junction solar cells, as shown in Figure 16a. [123] By directly growing GNWs on a Si substrate via PECVD, a GNWs/Si heterojunction solar cell was fabricated with a PCE of 3.1%, which increased to 5.1% after chemical modification. The PCE of the GNWs/Si heterojunction solar cell can be further enhanced by extending the growth time or by p-type doping, as shown in Figure 16b,c. As for GNW-based electrodes, their vertical orientation and wall-like structure provide a large surface area and abundant reactive sites, making GNWs excellent electrodes; however, their high sheet resistance remains one of the main challenges for such applications.
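For reference, the PCE values quoted for these cells follow from the standard relation PCE = Jsc·Voc·FF / Pin, with FF = Pmax/(Jsc·Voc). The sketch below extracts these metrics from a toy illuminated J-V curve; the diode parameters are invented and are not the measured characteristics of the cited devices.

```python
import numpy as np

# Extract solar-cell metrics from an illuminated J-V curve.
# PCE = Jsc * Voc * FF / P_in, with FF = P_max / (Jsc * Voc).
# The diode parameters below are invented for illustration.
P_IN = 100.0  # incident power, mW/cm^2 (AM1.5G)

v = np.linspace(0.0, 0.5, 501)                 # voltage, V
j = 10.0 - 1e-6 * (np.exp(v / 0.026) - 1.0)    # current density, mA/cm^2 (toy diode)

jsc = j[0]                                     # short-circuit current density
voc = np.interp(0.0, -j, v)                    # voltage where J crosses zero
p = j * v                                      # power density, mW/cm^2
ff = p.max() / (jsc * voc)                     # fill factor
pce = 100.0 * p.max() / P_IN                   # power conversion efficiency, %

print(f"Jsc = {jsc:.1f} mA/cm^2, Voc = {voc:.3f} V, FF = {ff:.2f}, PCE = {pce:.2f} %")
```

In this framing, the etching and doping strategies discussed above act mainly through the series resistance of the graphene or GNW electrode, which degrades the fill factor, and through the junction quality, which sets the open-circuit voltage.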
Application in FETs
Graphene is a 2D material with ultrahigh mobility and an ambipolar field effect. The room-temperature field-effect mobility of graphene-based transistors has been shown to be as high as 15 000 cm2 V−1 s−1, indicating promising applications in digital logic and high-frequency devices. [1,124] Although mechanically exfoliated graphene has the highest quality, it cannot be used for integrated device fabrication. PECVD enables the controllable synthesis of large-area graphene on dielectric substrates, and graphene has recently been grown on various substrates, including metals, SiO2/Si, sapphire, mica and h-BN, for applications in FETs. [34,39,50,52,64,125] PECVD-grown graphene films on transition metals have growth processes (dissolution-precipitation and surface-catalyzed dissociation) similar to those of thermally CVD-grown films and thus comparable field-effect mobilities. Although the growth temperature of PECVD is lower than that of thermal CVD, the graphene films still need to be transferred from the transition metals to dielectric substrates for FET applications. Nanographene films, in contrast, can be grown directly on dielectric substrates by PECVD, and two-terminal nanographene field-effect devices can be fabricated without any post-transfer process. However, the small crystallite size and edge defects formed during nucleation and coalescence result in field-effect mobilities as low as 15 cm2 V−1 s−1, and nanographene field-effect devices show weak gate modulation and low on/off ratios. More rational synthetic methods are required to obtain graphene films with fewer boundaries and edges.
Catalyst-free crystal growth of graphene on SiO2/Si has been observed in the CH4/H2 plasma. [52] HGCs can be grown on SiO2/Si with sizes of about 1 μm, and FETs based on these HGCs show high mobilities in the range of 550-1600 cm2 V−1 s−1, as shown in Figure 17a, comparable to Cu-CVD graphene and peel-off graphene. For FET applications, the electrical characteristics of graphene should be modulated to p-type or n-type. Amorphous nitrogen-doped carbon films have been obtained with NH3/CH4 mixtures by PECVD, [126] but they normally suffer from poor electrical transport and gate modulation due to their disordered structure. Recently, crystalline growth of nitrogen-doped graphene (NG) has been achieved in NH3/CH4 plasma. [125] NG FETs were fabricated using the growth substrate as the dielectric and gate electrode. An obvious negative shift of the Dirac point is observed in the transfer curve of the NG FETs, indicating typical n-type semiconductor behavior, as shown in Figure 17b. The field-effect mobility of the NG is in the range of 100-400 cm2 V−1 s−1, higher than that of amorphous nitrogen-doped carbon films (10 cm2 V−1 s−1) and comparable to NG grown on transition metals (200-450 cm2 V−1 s−1). The defect-free crystalline structure formed in the etching-and-growth process contributes to the outstanding electrical properties of the NG films, which show potential for future FETs.
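The field-effect mobilities quoted throughout this section are typically extracted from the linear-regime transconductance of a back-gated FET, μ = (L / (W·Cox·Vds)) · dId/dVg. The sketch below applies this to a synthetic transfer curve for a device on a 300 nm SiO2 back gate; the device dimensions and currents are invented for illustration.

```python
import numpy as np

# Field-effect mobility from the linear-regime transfer curve:
#   mu = L / (W * C_ox * V_ds) * dI_d/dV_g
# Device geometry and the transfer data below are invented for illustration.
L, W = 10e-4, 20e-4          # channel length/width, cm
V_DS = 0.1                   # drain-source bias, V
EPS0 = 8.85e-14              # F/cm
C_OX = 3.9 * EPS0 / 300e-7   # 300 nm SiO2 back gate, F/cm^2 (~1.15e-8)

vg = np.linspace(-40.0, 40.0, 81)                # gate voltage, V
i_d = 1e-6 * (2.0 + 0.5 * np.abs(vg - 5.0))      # toy ambipolar I_d, Dirac point at +5 V

gm = np.gradient(i_d, vg)                        # transconductance, S
mu = L / (W * C_OX * V_DS) * np.abs(gm)          # cm^2 V^-1 s^-1

print(f"Dirac point ~ {vg[np.argmin(i_d)]:.1f} V")
print(f"hole-branch mobility ~ {mu[10]:.0f} cm^2/Vs, electron-branch ~ {mu[-10]:.0f} cm^2/Vs")
```

The same extraction also makes n-type doping directly visible: as in Figure 17b, nitrogen doping shifts the Dirac point (the current minimum of the transfer curve) toward negative gate voltages while leaving the slope-based mobility estimate unchanged.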
However, the mobility of HGCs and NG grown by PECVD is still far lower than that of single-crystalline graphene, [16] so further improvement in their quality is required. Higher mobility can be achieved by using better substrates without charge traps in place of amorphous SiO2/Si. h-BN is an excellent substrate with an atomically smooth surface and a small lattice mismatch with graphene, and the similar 2D structures and van der Waals interaction between h-BN and graphene favor epitaxial growth in the PECVD process. Graphene films grown on h-BN by PECVD show a higher mobility of about 5000 cm2 V−1 s−1 compared with graphene grown on SiO2/Si substrates. [64] Therefore, the choice of dielectric substrate is critical for further improving the quality of graphene grown by PECVD.
Application in Supercapacitors
Supercapacitors (electric double-layer capacitors and pseudocapacitors) have attracted much interest as new energy storage devices because of their excellent charge/discharge rates, long cycle life and high power density. [127][128][129][130] Electric double-layer capacitors (EDLCs) operate by the rapid separation and adsorption of ions on the surface of the active materials. Porous carbon materials with high specific surface area, such as activated carbon (AC), mesoporous carbon and CNTs, have been widely used as active materials in supercapacitors. [131][132][133] GNWs have been developed as alternative supercapacitor active materials in view of their high surface area, high conductivity and low contact resistance, and their unique vertical structure facilitates the diffusion of ions. [82,[134][135][136] Compared with other porous materials (ACs and graphene stacks), GNW-based supercapacitors show excellent capacitive behavior even at relatively high frequencies.
For example, high-frequency (120 Hz) alternating current (ac) line filtering can be realized with CNW-based supercapacitors owing to their ultrafast dynamic response, [137] and EDLCs that can operate at kilohertz frequencies have been reported using VG nanosheets grown on nickel foam current collectors. [138] Recently, vertical graphene nanosheets (VGNSs) were directly synthesized on Ni foams at low temperature using butter, a natural precursor. [136] The as-grown VGNSs adhered to the Ni foams without any nonconductive polymeric binder. Supercapacitors based on these VGNSs exhibit a high specific capacitance of 230 F g−1 at a scan rate of 10 mV s−1 and negligible capacitance loss after 1500 cycles at high current density, as shown in Figure 18a,b. Furthermore, the morphology and structure of the VGNSs can be adjusted by modifying the plasma power, gas precursor and growth parameters, leading to various capacitive behaviors; [75,136] thinner edge planes and a higher degree of graphitization contribute to a lower charge transfer resistance and better specific capacitance, [136] as shown in Figure 18c,d. The specific capacitance can be further improved by combining 1D CNTs with 2D VGs. [135] In contrast to EDLCs, pseudocapacitors work by reversible Faradaic redox reactions of ions. Although pseudocapacitors usually have a higher specific capacitance, they suffer from lower power density and poorer cycling stability than EDLCs. High-performance pseudocapacitors can be realized by combining VG with metal oxides or electrically conducting polymers: for example, pseudocapacitors with MnO2-VG hybrids showed electrochemical behavior similar to EDLCs, and applications of VGs decorated with MnO2 of diverse morphologies and with other transition metal oxides have also been reported. [139,140]
Figure 17. a) Transfer characteristics of FETs based on H2/CH4 plasma-enhanced-grown graphene (black), Cu-CVD graphene (red), and peel-off graphene (blue). Reproduced with permission. [52] Copyright 2013, Wiley-VCH. b) Transfer characteristics of eight FETs based on CH4/NH3 plasma-enhanced-grown NG and an FET based on H2/CH4 plasma-enhanced-grown pristine graphene. Reproduced with permission. [125] Copyright 2015, American Chemical Society. c) Impedance spectra of VGNSs grown at 40% H2 (black) and 80% H2 (red). d) Cycle stability of VGNSs grown at 40% H2 (black) and 80% H2 (red). Reproduced with permission. [136]
Figure 19. Schematic of a) a graphene-based strain sensor; the process of applying strain to the device is shown in the picture. Reproduced with permission. [142] Copyright 2012, American Institute of Physics. b) A flexible GNWs/PDMS temperature sensor; the two-terminal device is fabricated by brushing Ag paste onto two sides of the CNWs. Reproduced with permission. [144] Copyright 2015, Royal Society of Chemistry. c) VG biosensors; anti-IgG is anchored to the VG nanosheet surface through Au nanoparticles. Reproduced with permission. [145] Copyright 2013, Nature Publishing Group.
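The specific capacitance quoted above (230 F g−1 at 10 mV s−1) is conventionally obtained by integrating the cyclic voltammogram: Csp = ∫|I| dV / (2·m·ν·ΔV). A brief sketch follows; the mass, scan window and the idealized rectangular CV response are placeholders, not the data from the cited work.

```python
import numpy as np

# Specific capacitance from a cyclic voltammogram:
#   C_sp = (integral of |I| dV over the full cycle) / (2 * m * nu * dV)
# I: current (A), m: active mass (g), nu: scan rate (V/s), dV: window (V).
# All numbers here are invented placeholders.
M = 1e-3            # active material mass, g
NU = 0.010          # scan rate, V/s (10 mV/s)
V_LO, V_HI = 0.0, 0.8

v = np.linspace(V_LO, V_HI, 200)
i_anodic = np.full_like(v, 2.0e-3)     # toy rectangular (ideal EDLC) response, A
i_cathodic = np.full_like(v, -2.0e-3)

charge_integral = np.trapz(np.abs(i_anodic), v) + np.trapz(np.abs(i_cathodic), v)
c_sp = charge_integral / (2.0 * M * NU * (V_HI - V_LO))

print(f"specific capacitance ~ {c_sp:.0f} F/g")
```

For an ideal rectangular CV this reduces to Csp = I/(m·ν); deviations from the rectangular shape at high scan rates are exactly the signature of the charge-transfer resistance that the thinner, more graphitized VGNS edge planes are reported to reduce.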
Application in Sensors
Recently, graphene-based materials have been widely studied for sensing applications, e.g., in strain sensors, [52,141-143] temperature sensors, [144] biosensors, [145-148] and gas sensors, [72,81,149] as shown in Figure 19. Nanographene films grown by PECVD have been transferred onto prestrained polydimethylsiloxane (PDMS). [51] It is found that the resistance of rippled graphene on PDMS increases linearly with applied strain. Strain sensors based on nanographene films can sustain a high tensile strain of over 30% owing to their high flexibility. Another advantage of nanographene-based strain sensors is that the gauge factor can be varied in the range from 10 to 10^3 by adjusting the growth temperature, because higher temperatures result in more nucleation sites, leading to a higher gauge factor. [143] Wearable temperature sensors have been widely used in applications such as electronic skins, robot sensors and human-machine interfaces. CNWs transferred onto PDMS substrates are found to have a temperature coefficient of resistivity three orders of magnitude higher than other graphene materials, owing to the excellent stretchability of GNWs and the large thermal expansion of PDMS. [144] Thus, a wearable temperature sensor can be fabricated by combining CNWs and PDMS, as shown in Figure 19(b). The device provides precise temperature measurement with intervals of 0.1 °C from 35 to 36 °C, indicating that it could be used to monitor human body temperature.

Figure 20. a) Schematic diagram of the SKPM measurement process for the Al2O3/nanographene/SiO2 structure. Inset: a typical AFM image of the as-grown nanographene. b) High-frequency CV characteristics under different gate voltage sweepings. c) Data retention for CTM. Reproduced with permission. [151] Copyright 2013, Nature Publishing Group.
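The two figures of merit discussed in this paragraph can be summarized in a minimal sketch; all resistance, strain and temperature values below are hypothetical:

```python
def gauge_factor(r0, r_strained, strain):
    """Gauge factor GF = (dR/R0) / strain, the sensitivity of a strain sensor."""
    return (r_strained - r0) / r0 / strain

def tcr(r0, r_heated, delta_t):
    """Temperature coefficient of resistance, (dR/R0) / dT, in 1/degC."""
    return (r_heated - r0) / r0 / delta_t

print(gauge_factor(1000.0, 1450.0, 0.03))  # 15.0, inside the reported 10-10^3 range
print(tcr(1000.0, 1002.0, 1.0))            # 2e-3 per degC (illustrative only)
```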
Carbon nanomaterials normally achieve high sensitivity in the detection of biomolecules due to their extremely sensitive surfaces. A biosensor has been fabricated by the direct growth of VG sheets on the sensor electrode through PECVD. [145] After deposition of Au NP-antibody conjugates on the VG surface, the device exhibits a significant change in electrical conductivity when binding to the target protein. Compared with the drop-casting method, the biosensor shows higher stability and repeatability, with selective detection of specific proteins. This one-step method to prepare VG nanosheet-based biosensors holds great potential for scalable fabrication.
When gas molecules (CO, NO2, H2O, or NH3) are adsorbed on the surface, graphene-based devices also show an increase or decrease in conductivity. GNWs fabricated on metal electrodes by dc plasma-enhanced CVD can respond to relatively low concentrations of NO2 and NH3, suggesting a low-cost, effective method to fabricate large-scale gas sensors. [72]
Application in Charge Trapping Memory
CTM is a nonvolatile flash memory based on an insulating charge storage layer. Due to its low dimensionality, chemical stability, and high work function, graphene is considered a potential candidate material for memory applications. [2,150] Owing to its abundant edges, chemical and thermal stability, low cost, and compatibility with complementary metal oxide semiconductor (CMOS) devices, nanographene is considered a good candidate as a charge trapping material. Zhang et al. have reported a nanographene-based CTM. [151] This novel CTM with a nanographene charge-trapping layer comprises the following layered structure: a heavily doped substrate (p-Si), a tunneling layer (4 nm SiO2), a charge storage layer (nanographene), a blocking layer (4 nm Al2O3) and a gate electrode, as shown in Figure 20a. The feasibility of nanographene as a trapping material was investigated using scanning Kelvin probe microscopy (SKPM) to test the surface potential variation. The results show that nanographene grown by PECVD has a highly controllable charge trapping capacity with a large trapping density, ultrathin thickness, and good uniformity.
The capacitance-voltage (CV) characteristics under different sweep voltages are shown in Figure 20b. The CV curves reveal different memory windows under different dual-direction gate voltage sweepings. When the voltage sweeps between −8 V and +8 V, a large memory window of 4.52 V is obtained. This large memory window proves that the nanographene contributes to charge storage. Control data without nanographene show no memory window, which further proves the charge trapping effect of nanographene. Figure 20c depicts the data retention characteristics at room temperature. A 2.52 V memory window shrinks to 2.23 V after 10^4 s. A charge loss of 44% after 10 years' operation is also predicted from the data, which may be caused by tunneling between neighboring nanographene domains.
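Ten-year retention figures such as the 44% charge loss quoted above are commonly obtained by extrapolating the measured memory window on a logarithmic time axis. The sketch below assumes three hypothetical retention points, only the last of which (2.23 V at 10^4 s) matches the reported data:

```python
import numpy as np

t = np.array([1e2, 1e3, 1e4])       # retention times in seconds (first two hypothetical)
w = np.array([2.52, 2.37, 2.23])    # memory window in V at each time
slope, intercept = np.polyfit(np.log10(t), w, 1)   # fit W(t) ~ intercept + slope*log10(t)

ten_years = 10 * 365 * 24 * 3600                   # ~3.15e8 s
w_10y = intercept + slope * np.log10(ten_years)
print(w_10y, 1.0 - w_10y / w[0])    # extrapolated window and fractional charge loss
```

With these invented intermediate points the extrapolation gives a loss of roughly 37%, of the same order as the 44% predicted in ref. [151].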
Summary and Outlook
In summary, we have discussed the controllable synthesis of graphene and its derivatives by PECVD and the related applications. Both 2D graphene and VG nanosheets have been synthesized by PECVD on various substrates. Compared with thermal CVD, the growth temperature can be adjusted, with the aid of the plasma, to be compatible with Si-based electronics. More importantly, the successful growth of graphene materials on dielectric, conducting and semiconducting substrates promotes applications in FETs, sensors, and energy conversion and storage devices. However, it remains challenging to realize practical applications of PECVD-grown graphene. (i) Wafer-scale synthesis of high-quality graphene on dielectric substrates would promote its applications in electronics, but the PECVD growth rate is too low for industrial-scale production; for example, the crystal growth rate of graphene is below 10 nm min−1. (ii) The growth of single-crystalline graphene contributes to higher mobilities and more stable electronic properties. Recently, millimeter-sized single-crystalline graphene domains have been achieved on transition metals by conventional CVD, but large-domain single-crystalline graphene has not yet been synthesized on dielectric substrates by PECVD. (iii) The performance of graphene-based devices can be further improved by using more favorable 2D materials. For example, h-BN is a superior insulating substrate for graphene-based devices. Graphene grown on h-BN exhibits higher mobilities with a small band gap. Moreover, atomically thin h-BN can also be used as the tunnel barrier in vertical graphene devices. Besides h-BN, other 2D materials should be explored as substrates for graphene growth. (iv) VG nanosheets, as 3D graphene networks, have been widely used in various applications, such as photovoltaic devices, supercapacitors and sensors. The controllable synthesis of VG nanosheets with different morphologies and structures benefits the improvement of device performance. In the future, systematic studies are required toward better controllability. In conclusion, PECVD is a promising method for the controllable synthesis of graphene and its derivatives, and should be further explored. | 2018-04-03T03:54:36.412Z | 2016-05-17T00:00:00.000 | {
"year": 2016,
"sha1": "7126751deb003583e3d12b3f4b5e6f582a81c220",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/advs.201600003",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7126751deb003583e3d12b3f4b5e6f582a81c220",
"s2fieldsofstudy": [
"Chemistry",
"Engineering",
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
220654139 | pes2o/s2orc | v3-fos-license | Pathogenesis and Management of Myocardial Injury in Coronavirus Disease 2019
Abstract The outbreak of coronavirus disease 2019 (COVID‐19), caused by severe acute respiratory syndrome coronavirus 2 (SARS‐CoV‐2) infection, has become a major health crisis and a worldwide pandemic. COVID‐19 is characterized by high infectivity, long incubation period, diverse clinical presentations, and strong transmission intensity. COVID‐19 can cause myocardial injury as well as other cardiovascular complications, particularly in senior patients with pre‐existing medical conditions. The current review summarizes the epidemiological characteristics, potential mechanisms, clinical manifestations, and recent progress in the management of COVID‐19 cardiovascular complications.
Introduction
In December 2019, a virus-associated disease, predominantly characterized by pneumonia, emerged and quickly spread around the world. The disease outbreak has triggered a major health crisis in many countries throughout the world and has now been officially named coronavirus disease 2019 (COVID-19) by the World Health Organization 1 . The pathogen causing COVID-19 has been attributed to severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), a novel coronavirus closely related to severe acute respiratory syndrome coronavirus (SARS-CoV) 2 .
SARS-CoV-2 is the third member of the coronavirus family known to cause life-threatening disease, following SARS-CoV and Middle East respiratory syndrome coronavirus (MERS-CoV).
COVID-19 associated myocardial injury
The cardiovascular system is highly vulnerable to the tissue injury caused by COVID-19. The pathogenesis of myocardial injury has been demonstrated by recent autopsy reports from different investigators 9,10 . The exact mechanism for the development of COVID-19 cardiovascular complications has not been fully understood. COVID-19 may damage the heart directly, indirectly, or both (Figure 1).
The occurrence of myocardial injury is generally diagnosed when the serum levels of troponin I/T (TnI/T) increase above the 99th percentile upper reference limit, after excluding TnI/T elevation related to obstructive coronary artery disease, according to the fourth universal definition of myocardial infarction 9 . The incidence of myocardial injury in COVID-19 ranges from 7.2% to 40.9% in general cohorts 5,[10][11][12][13][14][15][16][17][18] . TnI/T elevation appears much more striking in severe patients and non-survivors [19][20][21] . Recent reports on the prevalence and description of myocardial injury are summarized in Table 2. In a recent study enrolling 191 patients, myocardial injury occurred in 46% of non-survivors, compared with only about 1% of discharged patients 11 . Myocardial injury appears to serve as an independent risk factor for the severity and mortality of COVID-19, with reported hazard ratios ranging from 4.3 to 8.9 12,13,22 , and odds ratios from 6.6 to 26.9 14,23,24 in different studies. Furthermore, some of the deceased patients showed dynamic elevation of TnT levels during hospitalization, whereas discharged patients or survivors showed no change in TnT levels, suggesting that aggravated myocardial injury is associated with adverse COVID-19 prognosis 15,19 .
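A minimal sketch of the diagnostic rule just described; the troponin value and the 99th-percentile upper reference limit (URL) below are hypothetical, and real assays use assay-specific cut-offs:

```python
def myocardial_injury(troponin, url_99th, explained_by_obstructive_cad=False):
    """Flag myocardial injury: TnI/T above the 99th-percentile URL, after
    excluding elevations explained by obstructive coronary artery disease."""
    return troponin > url_99th and not explained_by_obstructive_cad

print(myocardial_injury(troponin=0.09, url_99th=0.04))  # True (hypothetical ng/mL values)
```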
Other adverse cardiovascular events in COVID-19
In addition to the direct myocardial injury caused by viral infection, other cardiovascular complications may occur in COVID-19, in particular acute vascular events, which may contribute to the development of myocardial injury and dysfunction.
Thromboembolic complications
In a study of 184 intensive care unit (ICU) patients with COVID-19, up to 31% presented with thromboembolic complications, including 27% with venous thromboembolism and 3.7% with arterial thrombotic events, even though all patients had received standard doses of thromboprophylaxis 29 . In a New York hospital, reportedly, several young and previously healthy COVID-19 patients were found to suffer an abnormally high incidence of stroke 30 . In another study, patients diagnosed with disseminated intravascular coagulation (DIC) accounted for 71.4% of non-survivors but only 0.6% of survivors 31 . Activated partial thromboplastin time (APTT) and prothrombin time (PT) were decreased in 16% and 30% of COVID-19 patients, respectively 32 . Moreover, levels of D-dimer were significantly higher among those who died of COVID-19 versus those who survived, and a subsequent study identified an abnormal D-dimer level (>1 μg/L) as a major risk factor for death 11 , also suggesting that coagulation abnormality contributes to mortality. The above findings reveal a high prevalence of thromboembolic events during the development of COVID-19, and support the necessity of closely monitoring the coagulation and thrombosis status of hospitalized patients 33 .
Acute heart failure
Acute heart failure represents another common cardiovascular complication of COVID-19, especially in patients undergoing clinical deterioration 11,18,34 . In deceased patients, plasma NT-proBNP levels often showed dynamic elevation during hospitalization, revealing the close relationship between cardiac dysfunction and disease severity 15 . New-onset myocardial injury is partly responsible for acute cardiac failure, as described by several case reports discussed above 25,26 . Clearly, with COVID-19 progression, there is a positive correlation between the levels of NT-proBNP and TnT (R 2 = 0.376, P < 0.001) 15 . Nearly 50% of COVID-19 patients developing heart failure had a history of cardiovascular disease, suggesting that the decompensation of underlying cardiovascular conditions also strongly contributes to acute cardiac dysfunction 18 .
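The R 2 quoted above is the squared correlation coefficient from a linear regression of NT-proBNP on TnT. The following sketch shows the computation on synthetic data, not the study's:

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
tnt = rng.lognormal(mean=-3.0, sigma=1.0, size=200)         # synthetic TnT levels
ntprobnp = 5000.0 * tnt + rng.normal(0.0, 300.0, size=200)  # synthetic NT-proBNP levels
fit = linregress(tnt, ntprobnp)
print(fit.rvalue ** 2, fit.pvalue)                          # R^2 and its p-value
```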
Arrhythmia and cardiac arrest
An increased risk of arrhythmia has recently been reported in patients with SARS-CoV-2 infection 13 . Electrophysiologically, COVID-19 patients are prone to the development of tachycardia. A study of 112 COVID-19 patients reported that 29.5% of patients presented with tachycardia, along with other evidence of myocardial injury 13 .
The heart rate of COVID-19 patients appeared to be correlated with troponin levels, suggesting a link between tachycardia and myocardial injury in COVID-19. In addition to tachycardia, malignant arrhythmia and subsequent cardiac arrest may occur in COVID-19 patients, specifically in those with myocardial injury 15 , which might lead to sudden cardiac death, reportedly responsible for 11.1% of deaths 35 . A French population-based study found that the out-of-hospital cardiac arrest incidence during the COVID-19 pandemic increased twofold over that in the same weeks of the non-pandemic period 36 , and a third of the increase was attributable to suspected or confirmed COVID-19.
Mechanism underlying myocardial injury in COVID-19
Despite the high incidence of myocardial injury in COVID-19 patients, the exact mechanisms underlying the pathogenesis of cardiac injury and dysfunction remain largely unclear. Molecular and cellular evidence and clinical data have disclosed multifactorial events and pathways which likely trigger or accelerate the micro- and macro-processes of myocardial injury. The viral infection may provoke multiple pathogenic factors, which may directly or indirectly cause the impairment of cardiovascular cells, as illustrated in Figure 1.
SARS-CoV-2 host cell invasion through surface ACE2 receptor
Similar to SARS-CoV, SARS-CoV-2 invades host cells through the binding of the viral spike protein (S protein) to the ACE2 surface receptor 37 (Figure 2). Known as a negative regulator of the renin-angiotensin system (RAS), ACE2 plays a regulatory role in counterbalancing the bioactivity of ACE 38 . As a membrane protein, it can also initiate outside-in signaling 39 .
The specific cellular mechanism by which SARS-CoV-2 damages cardiomyocytes has not been completely clarified. SARS-CoV-2 shares a similar biological pathway with SARS-CoV. Both viruses rely on type II transmembrane serine protease (TMPRSS2), another protein expressed on the cellular membrane, to cleave the S protein and expose the receptor-binding domain for binding with ACE2. This S protein-ACE2
binding leads to the endocytosis of virus particles 40 and may be followed by the downregulation of ACE2 expression in cardiomyocytes 41,42 and the over-activation of the RAS. The downregulation of ACE2 associated with SARS-CoV infection is partly caused by the shedding of the ACE2 ectodomain, mediated by TNF-α-converting enzyme (TACE) in coupling with the production of TNF-α, a well-known pro-fibrotic and myocardium-damaging factor 43 . The Ras-ERK-AP-1 pathway may be triggered, as well as the activation of C-C motif chemokine ligand 2 (CCL2, a pro-fibrotic factor) 44 .
The above theories are supported by an autopsy report regarding SARS, showing the presence of SARS-CoV in the heart associated with marked downregulation of ACE2 expression 42 . Interstitial fibrosis was observed in the heart tissues of SARS and COVID-19 patients with myocardial injury, implying the involvement of a subsequent pro-fibrotic effect 25,42 . However, even though the identification of viral particles and viral genetic material in the myocardium of COVID-19 cases has offered pathological evidence of viral myocarditis 25,45 , further investigation is needed to determine the expression of ACE2 and its interactions with the virus and host cell components, which may help clarify the impact of changed cellular ACE2 levels on the myocardial injury driven by SARS-CoV-2 infection.
Hypoxia and ischemic injury
Pulmonary inflammation and dysfunction caused by SARS-CoV-2 infection limit the oxygen-blood exchange and trigger hypoxemia, hypotension, and even septic
shock 46 . Consequently, insufficient oxygen supply may occur in vital organs, including the heart. Concomitantly, myocardial oxygen demand may be elevated by heightened temperature and a high myocardial metabolic rate, which augments the inflammatory burden and the imbalance between oxygen supply and consumption 47 .
Along with COVID-19 progression, this imbalance is increasingly aggravated and worsened by the development of metabolic acidosis, fluid or electrolyte disorders, and dysfunction of the neuro-humoral system 25 . Thus, myocardial injury in COVID-19 patients may be indirectly triggered or augmented, especially in those with pre-existing cardiovascular disorders and compromised myocardial reserve capacity, which may already be exhausted on the supply side 15 .
Abnormal coagulation and microcirculatory disturbance
In theory, SARS-CoV-2 may directly attack vascular endothelial cells, which also express high levels of ACE2 48 , leading to abnormal coagulation and microcirculatory disturbance. Intramural microvascular blood flow may be altered, causing regional ischemia, followed by focal myocardial injury and cardiac dysfunction 49 . A recent study 50 has shown that COVID-19 patients with DIC have a high incidence of myocardial injury. However, the detrimental effects of abnormal coagulation and microcirculatory disorders on myocardial injury need to be confirmed by further pathological evidence. Inflammation of small vessel walls and diffuse microcirculatory thrombosis have been identified in liver and lung biopsy specimens from COVID-19 patients; however, so far, there has been a lack of convincing evidence in the heart 51,52 .
Cytokine Storm
Previous studies have confirmed that immune abnormalities contribute to many pathological changes in SARS-CoV and MERS-CoV infection 7,53 . Specifically, the cytokine storm represents excessive and uncontrollable cytokine production in response to virus invasion and is one of the main contributors to pathogenic injury to the heart. The levels of serum pro-inflammatory cytokines [e.g., interleukin-1β (IL-1β), IL-6, interferon-γ (IFN-γ)] are markedly increased in COVID-19 patients and associated with disease progression 5 . Interestingly, in the cytokine storm, Th2 anti-inflammatory cytokines, such as IL-4 and IL-10, are reportedly also at high levels, and are even related to COVID-19 severity 18 . Asymptomatic patients exhibited lower levels of both pro- and anti-inflammatory cytokines than the symptomatic group, suggesting the pathogenic role of cytokines 54 .
Among the inflammatory cytokines arising from anti-viral immune responses, IL-6 serves as a core component of the cytokine storm, being expressed at significantly higher levels in COVID-19 patients with severe conditions and adverse prognosis, compared with those without 11,18,34 . IL-6 not only amplifies the cytokine storm by stimulating the production of other pro-inflammatory cytokines but also promotes vascular leakage and interstitial edema 55 . Moreover, IL-6 weakens papillary muscle contraction and causes myocardial dysfunction 56 . Increased levels of IL-6 occurred in many hospitalized COVID-19 patients and were significantly associated with high hs-TnI levels 57 .
Clinical profiles of COVID-19 associated myocardial injury
The COVID-19 associated myocardial injury occurs more frequently in the elderly with pre-existing cardiovascular comorbidities or risk factors, e.g., diabetes, hypertension, coronary heart disease, and chronic kidney disease 12,15 , which are known independent risk factors for heart disease 22 . Given that pre-existing CVD and in-hospital myocardial injury are both key determinants of COVID-19 fatality 12,58 , it is not surprising that COVID-19 patients with both adverse conditions have the highest mortality (69.4%), compared with patients without myocardial injury but with underlying CVD (13.3%) and patients with myocardial injury but without underlying CVD (37.5%), while the mortality in patients without myocardial injury or underlying CVD is the lowest (7.62%) 15 . However, these cardiac imaging changes may be, to a certain degree, attributable to pre-existing cardiac disorders 13 . Besides that, the abnormalities on echocardiography were mainly a small amount of pericardial effusion 13 , and droplets have been observed in endomyocardial specimens of COVID-19 patients 25 . Focal, mainly perivascular interstitial fibrosis, and large (>20 μm), vacuolated, CD68-positive macrophages with coronavirus particles inside were also found in the myocardium.
However, so far, there has been no convincing evidence of cardiac intramural microcirculation dysfunction or thrombosis in COVID-19. Future research is required to clarify the histopathologic characteristics of COVID-19-related myocardial injury.
Strategies for targeting cardiovascular complications
To date, treatment of COVID-19 has been mostly restricted to supportive care measures as few specific therapeutics have been available to treat this disease.
Pre-existing poor-health conditions make patients more vulnerable to infection-induced cardiovascular complications, thus increasing related mortality risk 15 . Therefore, senior patients who have underlying cardiac conditions are highly vulnerable to COVID-19 cardiac injury, and they should be prioritized for clinical care.
Regarding diagnostic criteria, abnormal levels of myocardial biomarkers, especially TnI/T, constitute the main criteria to identify COVID-19 patients with myocardial injury. However, TnI/T changes may be affected by other determinants, such as infection status, hypoxia, and renal insufficiency, which are commonly
observed with the development of COVID-19. The "rise-and-fall" pattern of TnI/T is also seen in patients with acute coronary syndrome (ACS). There may be a longer waiting period from first symptom onset to receiving medical care during the COVID-19 pandemic than in the non-pandemic period. Hence, a comprehensive assessment of heart function in COVID-19 patients should be performed using electrocardiography, imaging, and laboratory testing for proper clinical judgment in patients with abnormal TnI/T levels. However, even after comprehensive examinations, it sometimes remains hard to differentiate ACS from other TnI/T-elevating conditions associated with COVID-19 59 . Therefore, it is essential to promptly perform coronary angiography and continue necessary primary percutaneous coronary intervention (PCI) for patients with suspected ACS. The primary PCI procedures used for ACS patients remain suitable for COVID-19 patients, even though elective coronary procedures in the catheterization laboratory are recommended to be temporarily suspended due to the COVID-19 pandemic 62,63 .
Considering the high prevalence of thromboembolic complications in COVID-19 patients, it is essential to actively provide prophylactic anticoagulation during hospitalization for the management of COVID-19, especially for severe COVID-19. A recent study involving 449 severe COVID-19 patients found that treatment with unfractionated heparin or low-molecular-weight heparin for at least 7 days could significantly reduce mortality in patients meeting the criteria for sepsis-induced DIC, or in patients with markedly elevated D-dimer 64 . For potential contributors to arrhythmia progression, such as pro-arrhythmic medication effects or electrolyte disturbances, identifying and correcting these risk factors should be a high priority.
Anti-viral therapies
Since the outbreak of COVID-19, several anti-virus agents have been proposed and are currently under clinical investigation. Among them, the most hopeful one is remdesivir, a broad-spectrum investigational antiviral agent that was initially developed for treating Ebola virus infection but failed to show satisfactory efficacy in clinical trials 67 . In the first randomized controlled trial (RCT) regarding COVID-19, remdesivir showed little clinical benefit compared with placebo for serious COVID-19 patients 68 . However, this trial was terminated early, so it is underpowered to draw any definite conclusion. A second RCT, enrolling 1063 participants, showed that remdesivir is superior to control treatment in shortening the time to recovery (11 days vs. 15 days, P < 0.001) and alleviating respiratory tract infection in adults hospitalized with COVID-19 69 . There was no significant difference in mortality between the groups receiving remdesivir and placebo 69 . Nonetheless, remdesivir has offered new insight into therapeutic approaches against the current global COVID-19 crisis.
Anti-inflammatory and immunoregulatory agents
The pivotal role of immunologic over-response in COVID-19 has prompted anti-inflammatory therapy to be studied for treating COVID-19. Hydroxychloroquine and chloroquine are traditional anti-malarial drugs that can efficiently control SARS-CoV-2 replication in vitro 70,71 . The first study of hydroxychloroquine for COVID-19 treatment was a small open-label, non-randomized study,
in which hydroxychloroquine administration was significantly associated with viral load reduction/disappearance 72 . However, a double-masked non-randomized trial has yielded conflicting results 73 , and prolongation of the QT interval was observed in patients who underwent hydroxychloroquine treatment 73,74 .
Hydroxychloroquine administration was not associated with a lowered risk of intubation or death in an observational study involving 1446 patients 75 .
Targeted anti-inflammatory therapies, such as IL-6 blockade, have also been viewed as potential therapeutic options, given the pivotal role of the cytokine storm in the pathogenesis of COVID-19 and its cardiovascular complications. The anti-IL-6 receptor monoclonal antibody tocilizumab has been reported to quickly control fever and improve respiratory function in 21 severe COVID-19 patients 84 . However, an Italian RCT found that treatment with tocilizumab failed to reduce severe respiratory symptoms, intensive care visits, or death in patients with early-stage COVID-19 85 .
Thus, there appears to be controversy regarding the efficacy of anti-IL-6 therapy in COVID-19 cohorts. More data from patients in advanced-stage and severe conditions are expected from ongoing RCTs 86 .
Regulators of angiotensin activities
The structural evidence of SARS-CoV-2 entering the cell via ACE2 has led to the hypothesis that angiotensin-converting enzyme inhibitors (ACEI) and angiotensin receptor blockers (ARB) may potentially induce the overexpression of ACE2, and subsequently increase susceptibility to SARS-CoV-2 infection and aggravate disease severity 87 . However, ACEI/ARB appears to play a protective rather than a harmful role in COVID-19, given that SARS-CoV-2 invasion may result in the activation of the RAS axis, which is partly responsible for the severe organ injury of COVID-19 41 .
Potential mechanisms have been summarized in detail elsewhere 88 . Growing evidence has shown that COVID-19 patients under ACEI/ARB treatment had a similar or even better clinical prognosis than those without 15,16,89,90 . While several clinical trials are
looking for compelling evidence proving the usefulness and safety of ACEI/ARB in COVID-19 91,92 , it is not recommended to alter routine anti-hypertensive therapy in COVID-19 patients.
Conclusion
In the COVID-19 pandemic, patients with pre-existing medical conditions are vulnerable to myocardial injury as well as other cardiovascular complications.
All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication. Figure legend: SARS-CoV-2 invasion is mediated by the S protein binding to its receptor ACE2, which is primed by TMPRSS2 through cleaving the S protein into S1 and S2 subunits to facilitate the exposure of the RBD on the S1 subunit. The binding of the RBD to ACE2 is followed by the endocytosis of virus particles.
| 2020-07-21T13:05:32.266Z | 2020-07-19T00:00:00.000 | {
"year": 2020,
"sha1": "b4f4b7f1f89b301ba73c713ea9b4a36620adb698",
"oa_license": "CCBYNC",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/ejhf.1967",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "ecf3f5f8366c3fcb492440f0eb7bea2e2209af01",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
249926729 | pes2o/s2orc | v3-fos-license | Properties of stress field and differential equation of motion
In continuum mechanics, the stress and the divergence of the stress tensor are two vectors introduced to describe the internal force distribution and the resultant force of stress on a material element, respectively. This study discusses the properties of these vectors under the basic assumptions of classical continuum mechanics. To analyze the properties of the two vector fields, a vector whose gradient is the stress tensor is introduced. The stress and the divergence of the stress tensor are then expressed with the introduced vector. The results show that the stress field is a curl-free field when the stress tensor is symmetric and that the traditional understanding of the divergence of the stress tensor exceeds the limitation that the stress field is a curl-free field. Based on these conclusions, this paper further analyzes the motion of material elements described by the differential equation of motion, shear waves in elastomers and viscous flow in fluids. It is concluded that the traditional understanding of the differential equation of motion goes beyond Newton's second law of motion and that the motion of material elements exceeds the assumption of particle motion. The study then proposes a new differential equation of motion to derive the traditionally defined shear wave in elastomers and viscous flow in fluids. The results show that shear stress reciprocity is unnecessary in classical continuum mechanics, and that the viscous force defined by Newton is essentially different from that based on deformation theory.
Introduction
Continuum mechanics studies the motion, deformation and failure of deformable media such as fluids and solids under the continuum hypothesis, whereby real fluids and solids are considered to be perfectly continuous and no attention is paid to their molecular structure [1,2,3]. The subject is the basis and framework of engineering science. With the continuous development of engineering and technology, continuum mechanics has been fully applied in aerospace [4,5], information technology [6,7], biomedical engineering [8,9], micro/nano technology [10,11,12] and other fields. At the same time, the application of continuum mechanics in these fields promotes its development.
The continuum hypothesis enables stress to be defined to describe the internal force distribution and the equilibrium of free bodies in continua via the powerful methods of calculus [1,3]. In order to conveniently describe the stress on the bounding surface of a free body, and the equilibrium of a free body whose volume tends to zero under the resultant force, the stress tensor is introduced into continuum mechanics [1,2,3,13]. The introduction of the stress tensor also brings convenience in describing the relationship between stress and the deformation of continua. Since continuum mechanics is believed to be a branch of classical mechanics, the motion of the nonzero-volume elements constituting continua is treated as the motion of particles, and their dynamics is believed to be described by Newton's second law of motion in the initial configuration [1,2,3,14]. Under this assumption, the stress tensor is proved to be symmetric in classical continuum mechanics [13,14,15]. As mentioned above, the stress tensor is a mathematical quantity rather than a physical quantity, introduced to conveniently describe the stress in continua and the resultant surface force acting on material elements whose volume tends to zero. The stress and the divergence of the stress tensor are vector fields that can be expressed with the stress tensor. Therefore, the symmetry of the stress tensor should imply properties of these two vector fields, just as the displacement is curl free when the rotation tensor, the antisymmetric part of the gradient of displacement, vanishes. To my knowledge, the properties of these two vector fields have not been studied yet.
Due to the lack of understanding of the properties of the stress and the divergence of the stress tensor, there are many questions that cannot be answered clearly in continuum mechanics. For example, the rotation of a material element is ignored when its motion is described, yet the rotation of a material element is admitted when the deformation of continua is described, and the rotation tensor is described as the local rigid body rotation of the material element [2,13,14,15]. In the theories of elasticity and fluid dynamics, the local rigid body rotation, related to the rotation tensor in the theory of elasticity and the rotation rate tensor in the theory of fluid dynamics, does not contribute to stress. Paradoxes then manifest themselves in the theory of elasticity and the theory of fluid dynamics: the divergence of the stress tensor seems proportional to a spatial derivative of the rotation vector in the elastic wave equation and to a spatial derivative of the vorticity in the Navier-Stokes equation [2,13,14]. In particular, the part of the velocity field that is both solenoidal and irrotational, which can be expressed by a scalar potential without divergence, contributes to shear stress, but it does not show up in the Navier-Stokes equation. It looks as if shear stress (viscous force) does not always cause energy dissipation in viscous flow. In my opinion, only when the properties of the stress field are clarified can these questions be clearly answered.
This paper studies the properties of the stress field when the stress tensor is symmetric, and the limitation that the properties of the stress field impose on the divergence of the stress tensor. It is shown in this work that the stress field is curl free when the stress tensor is symmetric and that the divergence of the stress tensor can be expressed by the Laplacian of a vector field whose gradient is the stress tensor. The definition of shear waves in elastomers and viscous flow in fluids means that the stress tensor can be asymmetric, the stress field can be a curl field and the motion of material elements goes beyond particle motion.
Property of stress field
Due to the complexity of the relationship between the stress field and the strain (strain rate) field in a continuum, the stress tensor, the strain (or strain rate) tensor and the rotation (or rotation rate) tensor are introduced in continuum mechanics, and the relationship between the stress field and the strain field is shown via the relationship between the stress tensor and the strain tensor [1-3,13-15]. In order to figure out the properties of the stress and the divergence of the stress tensor when the stress tensor is symmetric, the study first analyzes the relationship between the properties of a vector field and its gradient.
Assuming that A represents an arbitrary vector field, two vectors infinitesimally close in distance satisfy the following relationship under linear expansion:

A(R + dR) = A(R) + dR · ∇A, (1)

where ∇ is the vector operator del and R is the radius vector. The gradient of A can be separated into three tensors: a spherical tensor α, a deviatoric tensor α′ and a rotation tensor χ, which can be expressed with the gradient of A as:

α = (1/3)(∇ · A) I, (2)

α′ = (1/2)[∇A + (∇A)^T] − (1/3)(∇ · A) I, (3)

χ = (1/2)[∇A − (∇A)^T], (4)

where I represents the second-order unit tensor. When the vector A represents the displacement field in elasticity, the spherical tensor α, the deviatoric tensor α′ and the rotation tensor χ describe, respectively, the volume expansion, the shear deformation and the rigid body rotation of the material element at a point. With Equations (2) to (4), the divergence and curl of A can be expressed as:

∇ · A = I : α, (5)

∇ × A = ε : χ, (6)

here, ε is the permutation symbol. It is obtained from Equations (5) and (6) that the divergence and curl of a vector field are contained in the spherical tensor and the rotation tensor, respectively. The vector field A is a curl-free field when the gradient of A is a symmetric tensor, and it is a curl field when the gradient of A is an asymmetric tensor.
By taking the divergence of a gradient, the Laplace operator is obtained. The Laplacian of a vector field is another vector field and is expressed as:

∇²A = ∇(∇ · A) − ∇ × (∇ × A). (7)

By substituting Equations (5) and (6) into Equation (7), the Laplacian of the vector field A can be rewritten as:

∇²A = ∇(I : α) − ∇ × (ε : χ). (8)

It is seen from Equations (7) and (8) that a vector field can be divided into two parts: a divergence part and a curl part. Both parts vanish for a field that is both solenoidal and irrotational.
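Equation (7) can be checked symbolically. The following SymPy sketch verifies the identity for a generic vector field with unspecified components f, g and h:

```python
import sympy as sp
from sympy.vector import CoordSys3D, gradient, divergence, curl

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z
f, g, h = [sp.Function(name)(x, y, z) for name in ('f', 'g', 'h')]
A = f*N.i + g*N.j + h*N.k

def vec_laplacian(V):
    # component-wise Laplacian in Cartesian coordinates
    comps = [sp.diff(V.dot(e), x, 2) + sp.diff(V.dot(e), y, 2) + sp.diff(V.dot(e), z, 2)
             for e in (N.i, N.j, N.k)]
    return comps[0]*N.i + comps[1]*N.j + comps[2]*N.k

residual = vec_laplacian(A) - (gradient(divergence(A)) - curl(curl(A)))
print([sp.simplify(residual.dot(e)) for e in (N.i, N.j, N.k)])  # [0, 0, 0]
```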
It is known that stress tensor is introduced to describe the internal force in continua.
Therefore, the symmetric second-order stress tensor can be expressed by the gradient of a curl-free vector field. Assuming that the curl-free vector field is Σ_S, called the stress potential here, the symmetric second-order stress tensor, symbolled σ_S, can be expressed with Σ_S as:

σ_S = ∇Σ_S. (9)

With Equations (1) and (9), the stress, symbolled F, can be expressed as:

F = n · σ_S = n · ∇Σ_S = ∂Σ_S/∂n, (10)

with n the unit outer normal of the surface element. It is seen from Equation (10) that the stress F is the directional derivative of the stress potential Σ_S. Since the stress potential Σ_S is a curl-free field, the stress field F is also a curl-free field. With Equation (9) and the properties of the stress potential Σ_S, the divergence of the stress tensor can be rewritten as follows:

∇ · σ_S = ∇²Σ_S = ∇(∇ · Σ_S) = ∇(I : σ_S). (11)

It is obtained from Equation (11) that the divergence of the stress tensor σ_S equals the gradient of the trace of σ_S. Since the trace of σ_S is an invariant, the divergence of the stress tensor σ_S does not change with the selection of coordinates.
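The statements around Equations (9) to (11) can be verified in the same way. Since any curl-free potential is locally the gradient of a scalar, the sketch below takes Σ_S = ∇φ for an arbitrary scalar function φ and confirms that σ_S = ∇Σ_S is symmetric and that its divergence equals the gradient of its trace:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
X = (x, y, z)
phi = sp.Function('phi')(x, y, z)
Sigma = [sp.diff(phi, v) for v in X]   # Sigma = grad(phi) is automatically curl-free
sigma = sp.Matrix(3, 3, lambda i, j: sp.diff(Sigma[j], X[i]))  # sigma_ij = d_i Sigma_j

print(sp.simplify(sigma - sigma.T))    # zero matrix: the stress tensor is symmetric
div_sigma = sp.Matrix([sum(sp.diff(sigma[i, j], X[i]) for i in range(3))
                       for j in range(3)])
grad_trace = sp.Matrix([sp.diff(sigma.trace(), v) for v in X])
print(sp.simplify(div_sigma - grad_trace))   # zero vector, as in Equation (11)
```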
Reviewing the divergence of the stress tensor σ_S without considering the properties of the stress field, the properties of the traditional expression of the divergence of σ_S change with the selection of coordinates, and the divergence of σ_S degenerates into a curl-free field in the coordinate system whose axes are consistent with the eigenvectors of the stress tensor. For example, when a plane shear wave propagates in an elastomer, the eigenvectors of the stress tensors at different points are the same. Therefore, no shear wave equation should have been derivable, because the stress tensor σ_S has no shear stress components in the coordinate system whose axes are consistent with its eigenvectors. This means that the traditional understanding of the divergence of the stress tensor σ_S exceeds the limitation that the stress field is a curl-free field.
In the differential equation of motion, the divergence of the stress tensor σ_S describes the resultant force of the stress acting on the material element. Since the traditional understanding of the divergence of σ_S exceeds the limitation that the stress field is a curl-free field, the differential equation of motion breaks through the assumption that the motion of a material element forming a continuum is particle motion.
Elastic wave equation derivation from traditional motion description
In classical continuum mechanics, a continuum is regarded as a set of particles. By treating a material element as a particle and considering only its translation, the motion equation of the material element in differential form is expressed as [2,3]:

ρ Dv/Dt = ∇ · σ_S + f, (12)

where D/Dt is the material derivative, v is the velocity of material element translation, ρ is the mass density, σ_S is the symmetric second-order stress tensor, and f is the body force, which is a curl-free field.
With Equation (11), the motion equation of a material element forming a continuum can be rewritten as:

ρ Dv/Dt = ∇(I : σ_S) + f. (13)

It is obtained from Equation (13) that the translation of the material element is determined by the normal stress. The deviatoric stress does not contribute to the translation of the material element, which is different from the traditional understanding.
For an elastomer with small deformation, the element translation is expressed in differential form as [13,14]:

ρ ∂²u/∂t² = ∇ · σ_S + f, (14)

where ∂/∂t denotes the time derivative and u is the displacement. The constitutive relation and the strain-displacement relation of an isotropic elastomer in component form are expressed as:

σ_ij = C_ijkl e^S_kl, with C_ijkl = λ δ_ij δ_kl + μ (δ_ik δ_jl + δ_il δ_jk), (15)

e^S_kl = (1/2)(∂u_k/∂x_l + ∂u_l/∂x_k), (16)

where e^S_kl is the strain tensor, C_ijkl is the elastic tensor, δ_ij is the Kronecker delta, and λ and μ are the Lamé constants. When the displacement field is caused only by deformation, the displacement field is a curl-free field.
Substituting the strain-displacement relations (Equation (16)) into the constitutive relations (Equation (15)), and subsequently substituting the constitutive relations expressed with displacement into the equation of motion (Equation (14)), the displacement equation of motion, obtained without considering the limitation of the stress field on the divergence of the stress tensor σ_S, can be expressed in vector notation as [14]:

ρ ∂²u/∂t² = (λ + μ)∇(∇ · u) + μ∇²u + f, (17)

which is also called the Navier equation. In the traditional derivation of the elastic wave equation, the following formula is considered to be true:

∇²u = ∇(∇ · u) − ∇ × (∇ × u). (18)

Then the Navier equation is rewritten as:

ρ ∂²u/∂t² = (λ + 2μ)∇(∇ · u) − μ∇ × (∇ × u) + f. (19)

From the analysis in the above section, we know that the stress field and the displacement caused by deformation are curl-free fields. For an elastomer with small deformation, the stress potential Σ_S and the displacement u caused by deformation (the strain potential) should meet the following relationship:

∇Σ_S = λ(∇ · u) I + μ[∇u + (∇u)^T]. (20)

Therefore, under the traditional motion description, the displacement field should be a curl-free field and only the longitudinal wave can be derived. Otherwise, Equation (20) breaks the precondition that local rigid body rotation does not contribute to stress and that the stress field is a curl-free field.
Since shear waves exist objectively in elastic media, the motion of the material elements forming a continuum should be beyond the particle motion description and cannot be fully described by Newton's second law of motion.
Elastic wave equation derivation from new motion description
Though the existence of shear waves in real solid media is objective, it is hard to account for them under the particle motion description. According to the properties of the displacement field during shear wave propagation, the study believes that the motion equation of a material element forming a classical continuum can be expressed as:

ρ Dv/Dt = ∇²Σ + f, (21)

here, Σ is an arbitrary vector field, which can be a curl field. In this case, the stress tensor is an asymmetric tensor. The symmetric part and the antisymmetric part of the stress tensor can be separated, respectively, as:

σ^S = (1/2)[∇Σ + (∇Σ)^T], (22)

σ^A = (1/2)[∇Σ − (∇Σ)^T]. (23)

Replacing the stress potential Σ with the stress tensor, the motion equation of the material element is rewritten as:

ρ Dv/Dt = ∇(I : σ^S) − ∇ × (ε : σ^A) + f. (24)

It is seen that, under the new motion description of the material element, shear stress reciprocity is unnecessary in classical continuum mechanics. When the stress field is a curl field, the stress tensor is asymmetric.
For an elastomer with small deformation, the convective acceleration is ignored and the motion equation can be written as:

ρ ∂²u/∂t² = ∇²Σ + f. (25)

Under the new motion description, the study believes that the constitutive relation and the strain-displacement relationship of an isotropic elastomer should be expressed as follows:

σ_ij = C_ijkl e_kl, (26)

e = ∇u, (27)

here, e describes the strain, including the traditionally defined one and the local rigid body rotation.
For isotropic elastomers, the elastic tensor should be expressed as:

C_ijkl = λ δ_ij δ_kl + 2μ δ_ik δ_jl, (28)

so that the stress becomes

σ = λ(∇ · u) I + 2μ ∇u. (29)

It is seen from Equations (27) to (29) that the stress tensor is asymmetric when the displacement field is rotational and is symmetric when the displacement field is curl free. This means that Equations (27) to (29) reduce to the classical constitutive relations when the displacement field is curl free. The strain can be decomposed as

e = e^S + e^A, with e^S = (1/2)[∇u + (∇u)^T], e^A = (1/2)[∇u − (∇u)^T], (30)

where e^S and e^A are the strain tensor and the rotation tensor. Substituting Equations (27) to (29) into the motion equation (25) gives

ρ ∂²u/∂t² = λ∇(∇ · u) + 2μ∇²u + f. (31)

Via the relationship between a vector field and its gradient (Equations (5) and (6)),

∇ · u = I : e^S, ∇ × u = ε : e^A, (32)

Equation (31) is rewritten as:

ρ ∂²u/∂t² = (λ + 2μ)∇(I : e^S) − 2μ∇ × (ε : e^A) + f. (33)

It is seen from Equation (33) that the new wave equation derived from the new differential equation of motion can predict the existence of both the longitudinal wave and the shear wave in an elastomer, corresponding to the divergence part and the curl part of the displacement field, respectively.
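A SymPy sketch of the claim just made: with e = ∇u and an isotropic law of the form σ = λ(∇·u)I + 2μ∇u (one possible form of Equations (27) to (29)), the antisymmetric part of σ is proportional to the curl of u, so σ is symmetric exactly when u is curl-free:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
X = (x, y, z)
lam, mu = sp.symbols('lambda mu')
u = [sp.Function(name)(x, y, z) for name in ('u1', 'u2', 'u3')]
grad_u = sp.Matrix(3, 3, lambda i, j: sp.diff(u[j], X[i]))  # e_ij = d_i u_j
sigma = lam * grad_u.trace() * sp.eye(3) + 2 * mu * grad_u  # isotropic law, one possible form

asym = sp.expand(sigma - sigma.T)   # the trace term cancels; only the curl of u survives
print(asym[0, 1])                   # 2*mu*(d_x u2 - d_y u1), a component of curl(u)
```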
In the classical theory of elasticity, though local rigid body rotation is admitted, only the traditionally defined deformation is considered in the deformation coordination, which means that the material elements composing an elastomer could rotate freely. This is inconsistent with the facts. In admitting the rotation of material elements, the study considers that the deformation coordination of an elastomer should be described as follows [16]:

∇ × e = 0, i.e., ε_ijk ∂_j e_kl = 0. (34)
Derivation of Navier-Stokes equation from new motion equation
The derivation of the elastic wave equation can only illustrate that the new differential equation of motion is appropriate for explaining the existence of the traditionally defined shear wave.
Whether the new differential equation of motion is universal in describing the motion of the material elements forming continua is questionable. Here I verify the generality of the new differential equation of motion in describing the motion of continua by deriving the Navier-Stokes equation.
For viscous fluids, the stress is related to the volume deformation and the shear flow. When shear flow occurs in a viscous fluid, the velocity field is a curl field. Therefore, the stress field in the viscous fluid is a curl field, and the stress tensor is an asymmetric tensor.
Separating the stress caused by volume deformation from the stress caused by shear flow, the motion equation of the material element is expressed as:

ρ Dv/Dt = −∇p + ∇²Σ_d + f, (35)

with

∇Σ_d = d, I : d = 0, (36), (37)

here, p is the pressure and d is the deviatoric stress. For a Newtonian fluid, the relation between the deviatoric stress and the rate of deviatoric strain ξ in component form is expressed as:

d_ij = η ξ_ij, (38)

where η is the viscosity of the fluid. The relation between the deviatoric strain rate ξ and the velocity v is

ξ_ij = ∂v_j/∂x_i − (1/3) δ_ij (∇ · v). (39)

Substituting Equations (38) and (39) into Equation (35) gives

ρ Dv/Dt = −∇p + η[∇²v − (1/3)∇(∇ · v)] + f. (40)

Since the following relations hold for incompressible flow:

∇ · v = 0, ∇²v = −∇ × (∇ × v), (41)

Equation (40) can be rewritten as:

ρ Dv/Dt = −∇p + η∇²v + f. (42)

Equation (42) is the same as the Navier-Stokes equation of motion for an incompressible fluid.
This means that the new motion equation proposed in this study is also suitable for describing the motion of material elements forming fluids.
It is seen from Equation (39) that the viscous force is generated only by the shear flow of the fluid (or the relative slide of fluid elements) rather than by the shear deformation of the fluid. This means that the viscous force defined by Newton is essentially different from that based on deformation theory. Comparing the viscous force defined by Newton with that defined by deformation theory, there are two main differences between them. One is that the former shows that fluid flow cannot produce viscous force when its velocity field is described by a scalar potential, while the latter shows that such flow may produce viscous force. The other is that the former holds that viscous force is related to local rigid body rotation (local vorticity), while the latter holds that viscous force is independent of local rigid body rotation. The difference seems to have been noticed by Batchelor, who pointed out a paradox in the description of viscous force by deformation theory, namely that the viscous force should be independent of the local vorticity [2]. Since the viscous force term in the Navier-Stokes equation is related only to the rotational velocity field, it is proved that Newton's definition is right and the definition based on deformation theory is wrong. This result once again shows that the current deformation theory used in continuum mechanics has serious problems and that a new theory is urgently needed to describe the motion and deformation of continua.
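The point about potential flow can be made concrete. For the hypothetical harmonic potential φ = xyz, the velocity v = ∇φ is both incompressible and irrotational; the Navier-Stokes viscous term η∇²v then vanishes identically even though the classical shear strain rate is nonzero:

```python
import sympy as sp
from sympy.vector import CoordSys3D, divergence, curl

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z
phi = x*y*z                          # harmonic scalar potential: laplacian(xyz) = 0
v = sp.diff(phi, x)*N.i + sp.diff(phi, y)*N.j + sp.diff(phi, z)*N.k  # v = grad(phi)

print(divergence(v), curl(v))        # 0 and the zero vector: incompressible, irrotational
lap = [sp.diff(v.dot(e), x, 2) + sp.diff(v.dot(e), y, 2) + sp.diff(v.dot(e), z, 2)
       for e in (N.i, N.j, N.k)]
print(lap)                           # [0, 0, 0]: the viscous term of Equation (42) vanishes
shear_rate_xy = (sp.diff(v.dot(N.j), x) + sp.diff(v.dot(N.i), y)) / 2
print(sp.simplify(shear_rate_xy))    # z: a nonzero classical shear strain rate
```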
Discussion and conclusions
The study analyzes the properties of the stress field when the stress tensor is symmetric, and the limitation that the properties of the stress field impose on the differential equation of motion. It is concluded that the symmetry of the stress tensor indicates that the stress field in a continuum is a curl-free field and that only the longitudinal wave can be derived from the traditional motion equation.
The traditional understanding of the divergence of the stress tensor σ_S exceeds the limitation that the stress field is a curl-free field. Consequently, the differential equation of motion breaks through the assumption that the motion of a material element forming a continuum is particle motion.
It should be pointed out that the new motion equation has taken the motion of the material elements forming classical continua beyond the description of particle motion. At present, only two motion models, the particle model and the rigid body model, are proposed in classical mechanics, and the latter is considered to be a collection of motions of the former. From the derivation of the wave equation alone, the shear wave can be derived from the conservation of the moment of momentum by adding rotational degrees of freedom to the material element. In this case, the longitudinal and shear waves in an elastomer correspond to the translation and rotation of the material element, respectively. However, if we believe that the motion of classical continua can be described by the same equation of motion, it is not advisable to regard the translation and rotation of material elements as independent, because the coupling between the curl field and the curl-free field in fluids could not then be reasonably explained. This implies that continuum mechanics needs to be re-established based on a new theory.
Declaration of competing interest
The author declares that he has no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | 2022-06-23T12:29:08.106Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "9108cf3ffd7e2606ec60c59fe5c91e8d43e36561",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "9108cf3ffd7e2606ec60c59fe5c91e8d43e36561",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
55986290 | pes2o/s2orc | v3-fos-license | Political Entrepreneurs, Indeterminate Goods and the Dynamic of Green Markets
This paper sheds new light on the green consumption phenomenon, and more generally on the determinants of the dynamics of green markets. We argue that some green goods, such as carbon-labeling products, are indeterminate goods (Lupton, 2005). Using Kuran and Sunstein's (1999) analysis, which extends Lupton's framework, we develop the idea that the dynamics of indeterminate markets depend on a random selection process within which political entrepreneurs may strategically use collective beliefs about product characteristics. Our innovative framework, which is based on the impact of socially constructed information and that of social sanctions, gives challenging insights for the study of green consumerism.
Introduction
The emergence of green markets is a result of a spontaneous evolution of producers' and consumers' behaviours towards environment-friendly choices. Green products are differentiated from standard products by allegations that inform consumers about their environmental characteristics. Today, green markets are expanding in many sectors of the economy in response to a willingness to pay a premium for goods and services with environmental benefits. According to the categories of goods developed by theorists of the economics of information (Nelson, 1970; Darby & Karni, 1973), most environmental attributes, such as the recyclable or biodegradable features of the product or the environment-friendly nature of production processes, belong to the category of credence goods. The quality is unknowable for the consumer before and even after consuming the product. Search information costs are prohibitive, so that the consumer has to believe the information conveyed by the environmental allegation if he wants to evaluate the quality of the product and express his green preferences through the purchase of the product. Nevertheless, green markets may be exposed to consumer confusion due to the multiplicity of information concerning other characteristics of the product and also to the well-known adverse selection phenomenon. Mechanisms such as the signalling of quality through quality labels and minimal quality standards have been implemented in order to restore the efficiency of the transaction and remedy these phenomena. Most importantly, eco-labels have been developed extensively (Note 1). They give information concerning the environmental characteristics of the products. In order to generate a change in buyer behaviour, this information must be based on reputation or spread a credible signal. Producers can achieve this goal by having the environmental characteristics of their products certified by a credible third party (Note 2). To convert consumers' environmental consciousness into purchase decisions, eco-labels must catch the attention of consumers, who possess limited cognitive abilities. Therefore, they should be credible and comprehensible (Note 3). In this case, they change credence environmental attributes into search attributes and reduce the evaluation and comparison costs of consumers who face an information overload. In this way, eco-labels allow consumers to discriminate between high and low environmental quality products, reinforce their trust in green products and reduce the risk of adverse selection (Note 4). Undoubtedly, there is some empirical evidence showing the interest of consumers in eco-labeled products. Indeed, the German Blue Angel label has been credited with a reduction in emissions of sulphur dioxide and carbon monoxide by more than 30 percent. In Sweden, the Good Environmental Choice and the Nordic Swan labels have been credited with a considerable reduction in chlorinated compounds, acids, and other pollutants from the Swedish forest industry (Thogersen, 2002). In that country, laundry detergents represent 70 percent of the annual consumption of household chemicals. Since the Good Environmental Choice and the Nordic Swan labels were introduced in the late 1980s, Swedish consumers have rejected the most environmentally harmful chemicals. In addition, eco-labeled detergent had a market share of more than 90 percent in Sweden during the year 1997 (Note 5). It is undeniable that the diffusion of green products is widely observed. This paper seeks to deepen our understanding of the phenomenon of green consumption, and we focus
particularly on the impact of socially constructed information on consumer behaviour. We first give an overview of the main contributions of the literature on green consumption, focusing particularly on models of moral motivations (section 2). In the third section, we present Lupton's (2005) contribution, which introduces a new category of goods, namely indeterminate goods, and sheds new light on the dynamics of indeterminate markets. We further show that carbon-labelling products may belong to that category (section 3). The fourth section is devoted to a thorough extension of Lupton's framework. In particular, we examine in depth the relationship that exists between political entrepreneurs, the belief formation process and the regulations associated with indeterminate markets (section 4). The fifth section applies the extended Lupton framework to study the dynamics of green markets. We specifically focus on carbon-labelling products and develop the idea that market dynamics depend on a random selection process within which political entrepreneurs play an important role. Our innovative framework, based on the impact of socially constructed information and that of social sanctions, brings new insights to the treatment of green consumerism (section 5). Section six gives some concluding remarks (section 6).
The Green Consumption Phenomenon: Issues and Debates
The analysis of environment-friendly product consumption poses a problem for economists because the benefits of private consumption have a collective dimension. As the environment is a collective good, consumers understand that their private consumption will have a very low impact on the improvement of environmental quality, because the latter is the result of collective consumption. Consequently, they would be willing to pay a higher price for green products if they were assured that other consumers would behave similarly (Note 6). Nevertheless, even with that knowledge, they could rationally display free-riding behaviour (Andreoni, 1988). To explain consumption patterns that the free-riding phenomenon should make theoretically improbable, the literature has suggested several explanations. First, green consumerism is coherent with economic models that integrate moral motivations into consumption behaviour. In this kind of model, contributing to the public good also produces some kind of private benefit for the contributor. Specifically, it is assumed that people achieve a "warm glow" by contributing to public goods (Andreoni, 1990; Hollander, 1990); a standard formal rendering is sketched after this paragraph. Moral motivation models probe why people would contribute to public goods. One model, developed by Brekke and Rege (2003), assumes for instance that individuals have preferences for achieving and maintaining a self-image as a socially responsible person. Self-image improves when an individual's actual behaviour gets closer to his view of the "morally ideal" behaviour, defined as the behaviour which, according to the individual's own judgment, would maximize social welfare if chosen by every member of society. Applied to green consumption, this reasoning means that the individual improves his self-image when he consumes a green product. Note, however, that the individual must already be aware of environmental issues before consuming the product. Moral decision-making has long been studied by social psychologists. One important contribution is the work of Schwartz (1970), who developed a social-psychological model of altruistic behaviour. The problem is to understand the process by which altruistic social norms translate into individual behaviour. According to Schwartz, the process begins with social norms regarding moral behaviour on which people generally agree. These norms represent the values and attitudes of significant others; we expect others to act in the morally proper way, and they in turn expect the same of us. The social norms are adopted by each of us on a personal level and thus become personal norms. Violating a personal norm generates guilt, and upholding one generates pride. Nevertheless, individuals may internalize the norms and still not act in accordance with them. Schwartz thus identifies two variables that influence whether or not personal norms translate into behaviour: the awareness of the consequences that action or inaction will have, and the ascription of responsibility for those consequences. When these two variables are high, personal norms translate into behaviour (Note 7).
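For reference, the impure-altruism ("warm glow") preferences of Andreoni (1990) are commonly written along the following lines, where $x_i$ denotes private consumption, $g_i$ individual $i$'s own contribution to the public good $G$, and $w_i$ his endowment (with prices normalized to one). The warm glow is the direct dependence of utility on $g_i$; pure altruism is the special case in which utility depends on $G$ alone. This is a standard textbook rendering, not a formulation taken from the papers cited above.

```latex
\max_{x_i,\, g_i} \; U_i = u_i\!\left(x_i,\; G,\; g_i\right),
\qquad G = \sum_{j} g_j,
\qquad \text{s.t. } x_i + g_i = w_i .
```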
Second, models of social sanctions may have some explanatory power for green consumerism. For instance, Rege (2004) shows that when contributions to public goods are motivated by the desire for social approval from others, multiple equilibria may result, including one case in which no one contributes and a second in which everyone contributes. These outcomes are distinguished by the extent to which social norms for contributing to collective goods are recognized and enforced. Nevertheless, to our knowledge, there are no applications of these models to the adoption and diffusion of green products. According to Howarth, Nyborg and Brekke (2006), models of social sanctions are of limited use in explaining green consumerism. Indeed, empirical studies have found evidence that psychological forces, such as internalized moral motives, may also be at work. In a survey conducted in Norway, for example, Bruvoll, Halvorsen and Nyborg (2002) found that while 41% of those who engaged in recycling agreed with the statement "I recycle partly because I want others to think of me as a responsible person," as many as 73% agreed that "I recycle partly because I want to think of myself as a responsible person." Howarth et al. (2006) underline that such models are often based on highly simplified assumptions regarding the psychological considerations that motivate people to undertake other-regarding actions. Moreover, these models typically produce unique equilibria and neglect the cognitive aspects of a decision motivated by moral considerations. These authors draw on Schwartz's (1970) argument to integrate these cognitive aspects into their analysis. They develop the idea that herd behaviour towards green products is observable when consumption is motivated by internalized social norms. In a simplified model where agents can choose between a "green" and a "brown" product, they make the following assumptions. First, people get an improved self-image from purchasing "green" rather than "brown" products; this self-image benefit increases with the individual's beliefs about the external benefits of choosing green. For instance, the success of the dolphin-safe label may be attributed to a video that depicted the dramatic effects on dolphins of traditional tuna harvesting. Second, the self-image benefit increases with the personal responsibility the consumer feels for the issue. Perceived responsibility is, in turn, larger the more common it is to choose the green alternative. High adoption rates thus influence consumers' propensity to interpret product adoption as a matter of moral responsibility. One may describe the consumer's behaviour as follows: "I know that others consume the green product; thus, I feel responsible for consuming it as well." What is worth mentioning is that their model suggests that advertising campaigns promoting the belief that "green" products provide important environmental benefits could increase the share of green consumers. In the same way, advertising campaigns may influence consumers' perceptions of the share of others who purchase green products (Note 8). In this respect, they note that advertising campaigns could be consciously exploited by marketers seeking to manipulate consumers' perceptions of a product's environmental attributes, as well as of its market share, in order to increase the rents derived from producing "green" products. The strength of their model is to account for the many pro-environmental behaviours that are unobservable, such as recycling; to some extent, models of social sanctions fail to explain these behaviours.
All of these models have some explanatory power for the phenomenon of green consumption. They are particularly useful for dealing with the expansion of green products that have experience and credence characteristics. And yet, in the field of the economics of quality, a new category of goods has emerged in the literature, namely indeterminate goods (Lupton, 2005). Our goal here is to show that some green goods, such as carbon-labelled products, may belong to that category. We then raise the question of the determinants of the diffusion of these goods and, to this end, develop a framework that integrates cognitive aspects as well as the impact of social sanctions on imitative behaviour. We thereby suggest an analysis complementary to that of Howarth et al. (2006).
Green Goods as Indeterminate Goods
In a challenging contribution, Lupton (2005) focuses on the economic problems associated with product quality uncertainty. Economic problems linked to quality uncertainty have been extensively analyzed, notably in major contributions by Arrow (1963) and Akerlof (1970). However, quality uncertainty has essentially been considered a gap in the consumer's knowledge, of which the omniscient producer takes advantage. Lupton offers a new vision of the nature and status of quality uncertainty. She argues that uncertainty about a product's quality can be shared by all agents in the market (buyers, sellers, etc.), and that this uncertainty can disrupt the market. This shared uncertainty finds its true meaning in the complexity of the production of goods and services, which are composed of many different substances and pass through many intermediary agents. In this respect, no agent can really claim to know the product perfectly. Recognizing the existence of this "shared uncertainty" opens new perspectives on the determinants of market dynamics. As far as product quality is concerned, Lupton is interested in three types of product uncertainty: uncertainty due to the actual making of the product, uncertainty due to the past of the product, and uncertainty due to the future impacts of the product. We focus here on the third type and present Lupton's argument, which deals with a specific aspect of quality, namely product safety.
Much work has been done showing that markets are confronted with difficulties due to controversies over product safety. The literature mainly focuses on problems of asymmetric information and on mechanisms to reduce ignorance and protect consumers (Daughety & Reinganum, 1995; Goldberg, 1974). Some authors deal with the scientific uncertainty regarding the health impacts of food products (Phillips & Isaac, 1998; Henson & Caswell, 1999), but when they analyze quality uncertainty, characteristics are considered either experience or credence characteristics. However, scientific controversies concerning GMOs, hormone-treated beef or the spreading of sewage sludge show that market dynamics depend on the product's indeterminacy regarding impacts on health and the environment. Potentially dangerous substances are identified, but the limited scientific knowledge of their effects is shared by producers, consumers and all agents linked to the market. The idea is that some groups of agents may develop contrary judgments about the product's impacts. They may strategically use scientific expertise to reject a product, but they can also ban a product because of a lack of scientific knowledge (Note 9). To deal with the fragility and dynamics of these markets, Lupton lays the basis of an innovative framework by introducing a new class of goods that adds to the former classification of search, experience and credence goods. These goods are named "indeterminate goods": goods or services whose characteristics cannot be known before purchase, or once they are consumed (as with experience goods), or even through additional costly information (as with credence goods). In the field of product safety, this indeterminacy refers to the product's impact on health and the environment. Indeterminate goods are very different from experience and credence goods regarding the definition of quality and the nature of uncertainty, the cost of acquiring information, and the nature of market problems. First of all, the notion of shared uncertainty must be distinguished from that of asymmetric information. In a context of asymmetric information, uncertainty about product quality is defined as consumers' lack of information about the product's characteristics, but those characteristics are clearly defined and known by the producer. Conversely, in a context of shared uncertainty, all the agents linked to the market are ignorant of the indeterminate characteristic of the product. Here, uncertainty about the good's characteristics cannot be controlled. It might be magnified or minimized to justify the different actions and expertise of agents. Different causal interpretations coexist, without any one version proving to be the right and unique answer. It can be defined as radical uncertainty, in the sense of a perceived uncertainty.
In addition, the costs of acquiring further knowledge of quality are very different. For experience and search goods, costs are linked to acquiring existing information that is already available and known to the producer. For indeterminate goods, these costs cannot be evaluated ex ante, as they consist in spending resources to create knowledge, in order to bring evidence concerning the future impact of the product. Such knowledge can be acquired through research investments, for instance. Consequently, the cost of acquiring information is much higher for indeterminate goods than for credence goods.
Finally, the dynamics of credence goods markets are subject to Akerlof's (1970) mechanism, according to which there may be no demand at all: as consumers cannot distinguish between a good and a bad quality car, they anticipate that the average quality will always be inferior to the expected quality, and the market ends up providing more and more poor quality products. In the case of indeterminate goods, by contrast, the market can collapse because some influential groups strategically use the shared uncertainty about the product's characteristics. These groups seek to influence public opinion on the health and environmental impacts of the products in order to generate standards or some kind of product ban. According to Lupton (2005), consumers adopt Keynesian mimetic behaviour: since they think the rest of the world is better informed, they act not on their own opinion of the quality of the product but on the average opinion of the market, so that the outcome is a conventional judgment on the quality of the product. The market collapses because of a change in the general opinion on the quality of the product, without any change in its intrinsic quality. The case of the ban on the spreading of sewage sludge illustrates the mechanism underlying the dynamics of these markets.
By introducing a new class of goods, Lupton's theoretical innovation opens new perspectives for studying the green consumption phenomenon and the dynamics of green markets in general. Indeed, we noted earlier that most green goods belong to the category of experience or credence goods. And yet green goods such as carbon-labelled products may be defined as indeterminate goods. Concern about climate change has stimulated interest in estimating the total amount of greenhouse gases (GHGs) emitted during the production, processing, retailing and use of many consumer goods (Smith et al., 2005). The outcome of these calculations is the carbon footprint, which reports the total amount of GHGs produced by a given activity. Since the mid-2000s, carbon labels that convey a summary of a product's carbon footprint to consumers have spread significantly. They have been developed particularly in the UK and have primarily targeted food products (Note 10). Although carbon footprints have been used by businesses to inform their internal environmental management, carbon labels, which inform consumers about the carbon content of a product, allow consumers to contribute to GHG emission reduction through their purchase behaviour. We argue that the environmental impact of such carbon-labelled products is indeterminate for two main reasons. First, a survey of the academic literature reveals that carbon footprinting generally lacks transparency in its meaning and methods of calculation (Wiedmann, Barrett & Lenzen, 2007). Besides, the study by Gadema and Oglethorpe (2011) conducted in the UK showed that, when asked specifically whether they would find advantage in having carbon labels, 81% of respondents strongly agreed that understanding carbon footprint information and comparing carbon footprints were difficult and confusing. Second, although climate policies are widely implemented throughout the world, climate science is complex, and there are scientific controversies as to the causal relationship between human GHG emissions and climate change (Jaeck, 2011). Consequently, if we follow the logic developed by Lupton (2005), the indeterminate characteristic of carbon-labelled products, namely their uncertain impact on tackling climate change, could be strategically used by actors linked to the market. On the basis of Lupton's ideas, we seek to analyze the dynamics of these markets; however, we propose a more refined conceptual framework. Indeed, Lupton neglects the impact of the social construction of information, and particularly the mechanism by which collective beliefs about products' indeterminate characteristics develop throughout society. We now turn to some criticism and extension of Lupton's work. This extension will provide the theoretical basis of our framework.
Indeterminate Goods, Belief Formation and Regulation
The question of what determines the collapse of indeterminate markets is only partially analyzed in Lupton's work. She argues that consumers adopt mimetic behaviour and act not on their own opinion of the quality of the product but on the average opinion of the market. The idea that consumers are rationally ignorant about the product's characteristics is present; nevertheless, the link between such incomplete information and the development of collective beliefs about product quality is absent from the analysis. In this respect, Kuran and Sunstein's (1999) work on the determinants of risk regulation takes that relationship into account. Indeed, they argue that activists may influence public decision makers by exploiting biases in risk perception (Note 11). For them, risk regulations related to human health or the environment may be disconnected from real risks and generate a waste of resources. They insist on the fact that new forms of collective action may strategically use risk perception biases and individual mental shortcuts in order to affect individual judgments about risk issues. The issue of risk perception has been widely analyzed by cognitive psychologists. Most of this work rests on laboratory experiments that reveal problems in processing information, leading individuals to biased risk perceptions (Note 12). Nevertheless, Kuran and Sunstein (1999) argue that, being based on laboratory experiments, this work neglects the role of interdependence among individuals, and more specifically the social dimension of information acquisition. The key point of Kuran and Sunstein's analysis is to show that the social interaction process generates incorrect collective beliefs, which in turn lead to counterproductive public policy (Note 13). In their analysis, Kuran and Sunstein (1999) focus particularly on the interaction between the availability heuristic and the social interaction process to explain collective belief formation. They do so by referring to concrete examples, such as the Love Canal story.
The "Love Canal" Story
Between 1942 and 1953, the Hooker Chemical Company filled Love Canal, an abandoned waterway that flows into the Niagara River in the State of New York, with more than 21,000 tons of chemical waste. In 1957, the local government developed the area, turning it into a neighborhood. In 1976, the canal overflowed its banks after several years of heavy precipitation. In the same year, a commission responsible for monitoring the Great Lakes detected the insecticide Mirex in Lake Ontario, and the New York Department of Environmental Conservation identified Love Canal as a major source of this insecticide. The local press immediately reported residents' fears concerning the side effects of the insecticide on human health, and many rumors developed in the area. Government officials ordered a series of tests that showed low levels of carcinogens in a basement near the dump site. In 1978, the EPA confirmed the findings and declared that the water was safe to drink and that Lake Ontario was not contaminated. In June 1978, however, Lois Gibbs, the president of the Love Canal Homeowners Association, played a key role in reinforcing fears of adverse health effects and mobilizing public opinion. This activist appeared on television and at the Capitol building, and organized demonstrations in the area. Responding to the outcry, New York State Health Commissioner Robert Whalen declared a public health emergency in the area in 1978. He urged area residents to stay out of their basements and avoid eating anything from their gardens, and he organized the temporary relocation of twenty-five pregnant women. In September 1978, he published a report entitled Love Canal: Public Health Time Bomb, which described Love Canal as a disaster area. At the same time, Governor Hugh Carey, who was at the height of his re-election campaign, followed Whalen's vision. He promised state aid for the locality and granted the relocation of 239 families living closest to the canal, all at the state's expense. At the federal level, President Carter declared an emergency in the area. A few days after his declaration, scientists re-examined the facts and showed that the health risks had been overestimated. Nevertheless, the event was widely covered by the media, in particular the New York Times. In May, President Carter ordered the relocation of 700 additional families, at a cost of three million dollars.
The same year, Congress established the Superfund program (Note 14).
Social Construction of Information and Irrational Public Policies
For Kuran and Sunstein (1999), the Superfund program was the result of the highly publicized Love Canal story. Opinion polls had revealed that 80% of U.S. citizens supported state intervention to treat this environmental problem (Note 15). As the authors underline, the emergence of such collective fear about risks on which there is no scientific consensus is in accordance with an essential result of cognitive psychology, namely the bounded rationality of individuals (Note 16). Owing to limited cognitive abilities, individuals rely on heuristics to develop their own judgment about risks, an attitude that can lead to erroneous judgments. Kuran and Sunstein (1999) insist that the media coverage and the various public meetings show that interdependence between individuals explains the emergence of such collective fears, which result from the influence of new forms of collective action. This argument is based on two hypotheses concerning individual behaviour.
First, individuals' cognitive abilities are limited; consequently, in forming their personal risk judgments, they rely on the availability heuristic. Second, individuals seek social approval: their personal judgments about risks reflect rational behaviour aimed at maintaining their reputation. In both cases, their behaviour is in accordance with the rational choice model. More precisely, Kuran and Sunstein (1999) argue that the formation of collective beliefs about risks is the result of a circular process between risk judgments expressed publicly and the formation of personal risk judgments. Information about risk is thus socially constructed and influences collective belief formation. This process originates in the behaviour of activists, who might also be called political entrepreneurs and who launch the formation of such beliefs. The beliefs then develop through two interacting mechanisms: an informational cascade mechanism and a reputational cascade mechanism (Note 17).
Mechanisms of Collective Belief Formation
Several theories have been suggested to explain uniformity in social behaviour. For instance, Akerlof (1980) and Kuran (1989) showed that reputational effects and social sanctions may generate uniform behaviour among the public. Jones (1984) simply claimed that the main motivation for individuals conforming to their peers is imitation. Such theories explain how constraints and circumstances produce uniformity and rigidity of social behaviour. Nevertheless, theorists of informational cascades such as Bikhchandani, Hirshleifer and Welch (1992) or Hirshleifer (1995) argued that this does not explain why uniform behaviour can lead to errors, nor why mass behaviour is fragile and subject to change. The basic idea is that an informational cascade emerges when it is optimal for an individual deciding on an action to rely on the observed behaviour of others and to neglect his private information. The singularity of this theory is that these imitation phenomena are based on the rational behaviour of individuals in a context of incomplete information. Drawing on Welch's (1992) example of a choice of restaurant, Bonardi and Keim (2005) describe the informational cascade mechanism as follows. A set of individuals makes a similar decision in sequence, for instance whether or not to eat at a specific restaurant along a busy tourist avenue. None of the individuals knows for certain what benefit he will derive from the decision; each has a probability of liking the restaurant, based on information received prior to the decision. When the first customer decides, his only source of information is that prior information. After the first actor chooses, however, additional information is available to those who have yet to choose: the sight of the first actor entering or eating at the restaurant. The second individual observes what the first one did, which affects his probability of choosing, and then makes his decision. If the signal he gets from the first individual's action offsets his own prior signal, he flips a coin to make the decision. By the time the third individual decides, his own private signal may be outweighed by the information he receives from observing the first two choosers, and he may decide to go to the restaurant regardless of his original signal. A fortiori, the next individuals will all do the same, and an "up" informational cascade has been generated.
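To make the sequential logic concrete, the following is a minimal Monte Carlo sketch of this mechanism. It assumes the standard textbook setup rather than anything specific to Bonardi and Keim's exposition: a binary state, conditionally independent binary signals of accuracy q, and the coin-flip tie-breaking rule described above. The parameter values and the inference shortcut flagged in the comments are illustrative assumptions.

```python
import random

def run_cascade(n_agents=50, q=0.7, good=True, rng=random):
    """One run of a Bikhchandani-Hirshleifer-Welch style cascade.

    Each agent receives a private binary signal that matches the true
    state with probability q (> 0.5), observes only the *actions* of
    predecessors, and chooses whether to adopt. Returns the action list.
    """
    d = 0                  # net signal count inferable from public history
    actions = []
    for _ in range(n_agents):
        s = 1 if (rng.random() < q) == good else -1   # private signal
        if d >= 2:         # up cascade: private signal is ignored
            adopt = True
        elif d <= -2:      # down cascade: private signal is ignored
            adopt = False
        else:
            total = d + s
            if total > 0:
                adopt = True
            elif total < 0:
                adopt = False
            else:          # indifferent: flip a coin, as in the text
                adopt = rng.random() < 0.5
            # Outside a cascade, observers invert the decision rule and
            # read the action as one revealed signal (a simplification
            # for the coin-flip case, where the action is only
            # partially informative).
            d += 1 if adopt else -1
        actions.append(adopt)
    return actions

if __name__ == "__main__":
    random.seed(0)
    trials = 10_000
    # The product is 'good', yet some runs lock into a wrong down cascade.
    wrong = sum(not run_cascade(good=True)[-1] for _ in range(trials))
    print(f"runs ending in a wrong (down) cascade: {wrong / trials:.3f}")
```

Runs like this also illustrate the error-proneness discussed below: even with fairly accurate signals (q = 0.7), a noticeable share of runs locks into a cascade that rejects a good product.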
Informational cascade theory diverges from other theories of behavioural conformity by emphasizing that mass behaviour develops rapidly and is fragile. In the sequence, the credibility of the first and second individuals is crucial in affecting the choice of the third. Indeed, social psychologists report that people imitate the actions of those who seem to have expertise in a field; according to Bikhchandani et al. (1998), this explains the success of product endorsements in which athletes are seen using a particular brand of athletic shoes. To start a cascade, the first individual must be an expert or a "fashion leader". Nevertheless, cascades are fragile: several kinds of shock can reverse a cascade, such as the release of new public information or the arrival of a better informed individual such as an expert. The main problem with such informational cascades is that they may be erroneous. Even if the precision of the first individual's information is high, it is not obviously better than the combined private information of the later individuals.
According to Kuran and Sunstein (1999), the informational cascade mechanism explains the formation of individual judgments on risk issues as well as of erroneous collective beliefs. Indeed, the Love Canal case is in keeping with the predictions of the informational cascade model. On the one hand, access to information on risk issues is costly, because such scientific information is complex and belongs to the scientific community. Consequently, the quality of the private information possessed by individuals is very low; individuals thus use the availability heuristic and develop their own judgments by observing the judgments of others. On the other hand, political entrepreneurs such as Lois Gibbs contribute to starting informational cascades through their public discourse in the media (Note 18). They are perceived by the public as leaders with high-quality private information on risk issues. The alarmist publication of Mr. Whalen, the aforementioned health commissioner, also helped reinforce the cascade. Finally, the collective beliefs appear to have been erroneous, given the persistent doubt among the scientific community. Such a result is in line with the predictions of informational cascade theory, which holds that the probability of convergence towards erroneous behaviour is high. This result holds especially when private information is not observable by other individuals, which prevents the aggregation of information and the correction of errors.
Nevertheless, for Kuran and Sunstein (1999), collective belief formation also results from a reputational cascade mechanism, which has its roots in Kuran's (1990) work on the preference falsification process. Revealed preference theory suggests that the preferences of individuals may be deduced from the observation of their actions. Kuran (1990), however, argued that an individual who joins a riot against a government does not necessarily support a change of political regime: it might simply be costly for him not to participate. According to Kuran (1990), there are contexts in which individuals can be punished or rewarded according to the preferences they express through their actions, so that in public choice analysis revealed preference theory may lead to misinterpretations. In his article, Kuran (1990) develops a framework for studying individual choices that encompasses a wide range of motivations. More precisely, individual utility has three dimensions: the social choice itself, the social sanctions associated with the individual's expressed choice, and the autonomy of individual decision. These three conflicting components of utility generate, for each individual, a private preference and a public preference, the latter being the one he reveals to others through his actions. The divergence between these two types of preference is at the heart of his preference falsification theory. According to Kuran (1990), this theory provides a better understanding of individual behaviour than traditional theory because an individual derives satisfaction from different, conflicting sources. The framework explains several social and economic choices in which reputation is a component of utility (Note 19), and combined with the existence of informational cascades, it gives a thorough understanding of how erroneous collective beliefs develop and spread in society.
The Interaction between Informational and Reputational Cascades
According to Kuran and Sunstein (1999), the preference falsification phenomenon contributes to the emergence of irrational policies for managing risks. As discussed above, an erroneous collective belief may develop through the informational cascade process. Such a belief may in turn lead an opportunistic politician who wants to maintain his popularity to falsify his preferences and implement policies disconnected from real risks. In the Love Canal story, the attitudes of Governor Hugh Carey and of President Carter might be interpreted as preference falsification, since the scientific community had shown that the risks were overestimated. Such behaviour by policymakers reinforces the erroneous collective belief among the public and prevents it from being reversed. As soon as the collective belief reaches a critical size, it affects the reputational benefits and costs of the individuals who have not yet adopted it, and induces them to falsify their preferences (Note 20). The collective belief may therefore expand through a reputational cascade mechanism (Note 21). This phenomenon leads to a falsification of knowledge which, combined with the loss of knowledge generated by the informational cascade, increases the probability that an erroneous collective belief will spread (Note 22). Consequently, the formation of collective beliefs on risk issues is the result of an interactive, circular process between judgments expressed publicly and the formation of private judgments. Kuran and Sunstein (1999) thus highlight the relationship between the preference falsification process, the social construction of information, and the irrationality of public policies for managing risk.
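This interaction can be caricatured with a Granovetter-style threshold sketch: each individual publicly voices the alarmed view once enough others already do, because the reputational cost of dissent then outweighs the expressive cost of falsifying his preference. The threshold distribution, the activist seed and every number below are illustrative assumptions made for this sketch, not Kuran and Sunstein's own model.

```python
import random

def reputational_cascade(n=1000, seed_activists=0.02, rounds=100, rng=random):
    """Threshold sketch of a reputational cascade (in the spirit of Kuran).

    Agent i publicly voices the 'alarmed' opinion once the share of
    others already voicing it exceeds a personal threshold t_i, i.e.
    once the reputational cost of dissent outweighs the expressive
    cost of preference falsification. Thresholds skewed towards zero
    (an illustrative Beta(1, 3) draw) let a small activist seed tip
    the whole population.
    """
    thresholds = [rng.betavariate(1, 3) for _ in range(n)]
    share = seed_activists           # political entrepreneurs start the cascade
    for _ in range(rounds):
        new_share = sum(t < share for t in thresholds) / n
        new_share = max(new_share, seed_activists)   # activists never recant
        if abs(new_share - share) < 1e-9:
            break
        share = new_share
    return share

if __name__ == "__main__":
    random.seed(1)
    print(f"final share publicly voicing alarm: {reputational_cascade():.2f}")
```

With thresholds skewed towards zero, a 2% activist seed is enough to tip virtually the whole population into publicly voicing alarm, whatever the underlying private judgments.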
Several authors have drawn on Kuran and Sunstein's analysis. Bonardi (2005) notably elaborates the relationships between political entrepreneurs or activists, experts and the media that strengthen the emergence and persistence of collective beliefs (Note 23). Indeed, the development of reputational cascades within the expert community is a necessary condition for preventing the reversal of informational cascades initiated by political entrepreneurs (Note 24). He cites, in particular, the case of the GMO import ban within the EU, which was largely the result of the influence of groups such as ATTAC or Greenpeace in the media.
In our view, Kuran and Sunstein's analysis, which describes the belief formation process and its effect on risk regulation, is a considerable improvement on Lupton's account of the dynamics of indeterminate markets, since it explicitly takes into account the link between political entrepreneurs, the emergence of collective beliefs and regulation. We now use this framework to study the dynamics of green goods that belong to the class of indeterminate products, developing in particular the idea that the dynamics of these markets resemble a random selection phenomenon.
Random Selection and the Dynamics of Green Markets
As mentioned previously, carbon-labelled products may be defined as indeterminate goods. In this section, we argue that the process by which such green goods expand on, or disappear from, the market is one we call random selection. This process depends intrinsically on the belief formation process through cascade effects. We develop this argument using the logic of Hirshleifer's (1995) model of the decision to smoke, and suggest a similar analysis of the consumption choice between a carbon-labelled product and a standard product. The assumptions of the model are as follows. Consider a sequence of individuals who may decide whether or not to buy the green product. Each individual knows his position in the sequence. Each individual observes the actions of his predecessors, namely whether they have consumed the green product or not, but not their private signals. Each individual has private information on the indeterminate characteristic of the product, that is, its ability to contribute to GHG emission reduction (Note 25), and develops a belief about this characteristic on the basis of his private information and the observed behaviour of other individuals. The logic of the model may be presented as follows. The first individual in the sequence observes a positive signal about the good's indeterminate characteristic and decides to consume product A (a carbon-labelled product). The second individual does not know the signal perceived by the first, but deduces from the observation of his action that it was positive. He decides to consume product A if his own private signal is positive; if he perceives a negative signal (namely, that the environmental impact of the product is low), he flips a coin to decide whether to consume product A. With the third individual's decision, the informational cascade starts. Suppose the two predecessors have consumed product A. The third individual can infer the first individual's signal from his action, and deduces from the observation of the second individual's choice that he too probably perceived a positive signal. He will thus neglect his own private information, form the belief that the environmental impact of the product is very high, and consume product A. All the followers behave like the third individual, and we observe the diffusion of product A: an "up cascade" develops. By analogy, if the first two individuals do not consume the product, a "down cascade" emerges and we observe the diffusion of the standard product.
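Under these assumptions, the run_cascade sketch given earlier applies directly, with "adopt" read as buying the carbon-labelled product. As a purely illustrative exercise (the signal accuracy q = 0.6 and the lock-in check below are arbitrary choices), one can estimate how often the market outcome is already fixed by the first few movers:

```python
import random

# Assumes run_cascade from the earlier sketch is in scope.
random.seed(3)
trials = 10_000
up = down = 0
for _ in range(trials):
    actions = run_cascade(n_agents=10, q=0.6, good=True)
    tail = actions[4:]            # behaviour once the first movers have chosen
    if all(tail):                 # everyone later buys the carbon-labelled good
        up += 1
    elif not any(tail):           # everyone later buys the standard good
        down += 1
print(f"up cascades: {up/trials:.2f}, down cascades: {down/trials:.2f}")
```

This reasoning calls for several remarks.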
First, the action of the second individual depends on the credibility of the first. If the first individual is an environmental expert, for example, the second will attach much more credibility to his behaviour even without knowing his private signal (Note 26). In the same way, the status or profession of the first individual reinforces the belief formation of the third individual, even though the latter has his own private information. Thus, one of the strong assumptions of informational cascade theory is that rationally ignorant individuals neglect their private information and form their beliefs on the basis of the observed behaviour of only two individuals. We believe this assumption remains realistic in the current context of complex and overloaded information concerning climate change issues.
Second, the translation of that belief into green purchase behaviour is not certain, because individuals may decline to buy the product owing to the free-riding problem. If we look at the expansion of the green good in particular, the analysis must therefore be deepened at several levels. First of all, the first individuals in the informational cascade have an incentive to free ride even if they believe that the good's environmental impact is real and effective. As the good's characteristic generates collective benefits when the good is consumed individually, one must assume that the informational cascade will first develop among individuals with strong ecological preferences when they learn about the good's environmental impact, whatever the objective quality of the information in the cascade. One may assume, for instance, that these first individuals in the sequence achieve a "warm glow" by consuming the product and thereby contributing to the public good. Second, to explain the mechanism by which green consumption might expand through the rest of the population, one needs to take into account the effect of a critical mass of green consumers on the incentives of other individuals to consume the green product. Indeed, if we follow Kuran's (1991) reasoning, which introduces the impact of social sanctions into the analysis of human behaviour, a critical mass of individuals must exist in society to affect the reputational costs and benefits associated with a particular behaviour or opinion. In our example, a critical mass is achieved when early consumers with "warm glow" green preferences start the informational cascade. Then, the more the green consumption phenomenon spreads, the more the individual reputational benefits of green consumption, and the individual reputational costs of standard consumption, increase. In that process, later individuals in the sequence are sensitive only to the sanctions and rewards associated with the different types of consumption (Note 27). However, we argue that reaching a critical mass is not a sufficient condition for social sanctions to be effective: collective beliefs about the risk of climate change if GHG emissions are not tackled may also play an important role. The reputational benefits and costs of later individuals in the sequence are therefore affected not only by the consumption behaviour of their peers, but also by the nature of public opinion about climate risk and how to handle it.
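The role of a critical mass can be illustrated with a stylised best-response sketch in which a small fraction of "warm glow" enthusiasts always buys green, while everyone else responds only to a reputational payoff that rises with the green market share. The payoff structure and every number below are illustrative assumptions made for this sketch, not parameters drawn from the literature.

```python
import random

def green_share(start, premium=0.3, rep_weight=0.6, enthusiasts=0.05,
                n=1000, rng=random):
    """Best-response dynamics for green consumption with a critical mass.

    A fraction of 'warm glow' enthusiasts (g_i = 1) always buys green;
    the rest (g_i = 0) buy only when the reputational benefit, which
    rises with the current green share s, exceeds the price premium:
    g_i + rep_weight * s > premium.
    """
    glow = [1.0 if rng.random() < enthusiasts else 0.0 for _ in range(n)]
    s = start
    for _ in range(100):
        new_s = sum(g + rep_weight * s > premium for g in glow) / n
        if abs(new_s - s) < 1e-9:
            break
        s = new_s
    return s

if __name__ == "__main__":
    random.seed(2)
    # Two stable outcomes: a niche of enthusiasts, or near-universal
    # adoption once the share passes the tipping point premium / rep_weight.
    print(f"small seed:         {green_share(0.05):.2f}")
    print(f"past tipping point: {green_share(0.60):.2f}")
```

The sketch displays two stable outcomes, a small niche of enthusiasts and near-universal adoption, separated by a tipping point at a share of premium/rep_weight; which one prevails depends on where the cascade dynamics leave the market share, which is precisely the random selection idea developed next.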
So far, we have provided a theoretical framework describing the expansion of green goods with indeterminate characteristics. If we now turn to the dynamics of such markets, namely the extent to which green goods may expand on, or disappear from, the market, it is necessary to take into account the fragility of informational cascades. Cascade theorists have shown that, since individuals neglect their private information, cascades concerning the environmental impacts of carbon-labelled products, or those associated with climate change risk, may easily be reversed if a new "fashion leader" or expert joins the sequence. Once these effects are taken into account, the dynamics of the indeterminate goods market depend on random selection (Figure 1). This phenomenon, which is implicit in Lupton's work, is made more explicit by the logic of belief formation through cascade effects. Moreover, such logic refers explicitly to the role played by political entrepreneurs, who seek to initiate informational cascades in accordance with the ideas they support. In this respect, we note that analysing market dynamics through a random selection process allows us to introduce "traditional" entrepreneurs into the analysis, since the latter may engage in political entrepreneurship strategies to reach their economic goals. In the competition between carbon-label producers and standard producers, strategically using the information associated with the environmental impacts of the product could be a new weapon to compete and win market share (Note 28).
Concluding Remarks
This paper has shed new light on the green consumption phenomenon and, more generally, on the determinants of the dynamics of green markets. The literature has principally analyzed these dynamics through adverse selection in a context of information asymmetry. We have argued that some green goods, notably carbon-labelled products, belong to the category of indeterminate goods. The dynamics of such markets are thus better analyzed in a context of shared uncertainty, and they depend on the impact, on consumption decisions, of collective beliefs about the products' environmental impact. This argument, which originates in Lupton's contribution, has been considerably deepened here. In particular, we have shown that Kuran and Sunstein's (1999) analysis of the belief formation process and its effect on regulation is complementary to Lupton's. Using Kuran and Sunstein's (1999) framework and extending it to the dynamics of indeterminate goods markets, we have developed the idea that market dynamics depend on a random selection process within which political entrepreneurs initiate informational cascades. Our framework, based on the impact of socially constructed information and of social sanctions, brings new insights to the study of green consumption. In terms of carbon labelling, according to a recent study for the EU Directorate-General for the Environment, 72 per cent of EU citizens support carbon labelling and think that it should be mandatory in the future (The Gallup Organization, 2009). Nevertheless, according to the study by Upham, Dendler and Bleda (2011) on the perception of carbon labels by UK consumers, only a very small percentage of consumers can be expected to make use of them. Such empirical evidence may be partially explained by our approach: the critical mass of green consumers needed to support general adoption of carbon-labelled products may not yet have been reached. Examining the factors that shape the formation of such a critical mass is an interesting avenue for further research. Our approach is also complementary to that of Howarth et al. (2006). Models of social sanctions are clearly appropriate for studying the adoption of green products whose consumption is observable; Howarth et al.'s analysis, by contrast, is better adapted to green consumption that is not observable. The heart of their analysis is to show that marketers may strategically use consumers' perceptions of the share of others who purchase green products, as well as their perceptions of the products' environmental attributes; in the end, consumers interpret green product adoption as a matter of moral responsibility and are given an incentive to consume green products. We believe that the logic of the random selection process is also suitable for studying green consumption behaviours that are not observable. The intrinsic mechanism underlying green consumption would then no longer depend on social interdependencies between consumers' moral motivations, but rather on the impact of collective beliefs arising from informational and reputational cascade effects. In this respect, several formal extensions of Howarth et al.'s analysis that take these cascade effects into account are possible, and they constitute promising research directions for the diffusion of green products with indeterminate characteristics, such as carbon-labelled products or biofuels.
Notes
Note 1. There is a wide range of eco-labelling programs, distinguished by their private or public character. They may be voluntary or mandatory, and they may refer to a variety of environmental attributes or to a specific one. The US Environmental Protection Agency (USEPA, 1993, 1994) has proposed a classification of environmental labels. In response to the proliferation of environmental claims, sometimes without sound basis, at the beginning of the eighties, the ISO (International Organization for Standardization) published norms to create a frame of reference in this field (http://www.iso.org/). Type 1 refers to eco-labelling programs certified by a third party; they are based on life cycle assessments, which differentiate products by their environmental attributes. Type 2 refers to environmental claims without third-party certification. Type 3 refers to environmental data on the product, certified by a third party. Public eco-labelling programs such as the German Blue Angel, the European eco-label, the Nordic Swan and the French eco-label NF Environnement belong to type 1.
Note 3. For a discussion of consumer attention in a context of information overload, see Davenport and Beck (2001). Note 4. Many studies have shown the ability of environmental labels certified by a third party to create consumer trust; the reader may refer to Enger and Lavik (1995).
Note 5. Further empirical evidence can be found in the work of Bjorner, Hansen and Clifford (2004), OECD (1997), Teisl, Roe and Hicks (2002), Roe, Teisl, Levy and Russel (2001), and Moon, Florkowski, Bruckner and Schonholf (2002). Note 6. Schmidtz (1991) and Landesman (1995) have highlighted this problem. Note 7. Hopper and Nielsen (1991) applied the Schwartz model to recycling behaviour. They collected experimental and survey data from residents of a large urban neighborhood with a community-wide curbside recycling program in order to determine the extent to which recycling could be conceptualized as altruistic behaviour. The results confirmed that recycling behaviour is consistent with Schwartz's altruism model, according to which behaviour is influenced by social norms, personal norms, and awareness of consequences. Note 8. Schultz (2002) quotes several studies indicating that recycling behaviour is correlated with respondents' beliefs about the frequency of recycling in their community. Note 9. The dispute between the US and the EU over trade in hormone-treated beef illustrates this. In 1989, the European Commission banned the use of five hormones in meat production, claiming that hormone-treated beef is unsafe for humans; the prohibition applies to both imported and domestically produced beef. The US and Canada brought the hormone case before the WTO in February 1998 and requested a dispute settlement panel to review the ban. They questioned the scientific basis of the ban and noted that the European measure provided a level of protection that was arbitrary and unjustified, and could therefore be considered a disguised restriction on international trade. The EU rejected these claims, arguing that scientific evidence on the long-term health effects of ingesting hormone-treated beef was insufficient or non-existent; it therefore asserted that it was within its rights to exercise the precautionary principle, according to which measures can be taken to maintain a high level of protection where scientific evidence is incomplete or unconvincing. After this defense, the US and Canada panel reports concluded that the EU had violated the SPS Agreement. Finally, the WTO concluded that the EU had not given sufficient scientific evidence, but the EU kept its ban while recognizing the need to seek further information and to review the measures in the light of future evidence. Lupton also refers to the striking case of the ban on the spreading of sewage sludge. In France, the food industry (Panzani) and mass retailers (Carrefour, Auchan), followed by processors and cooperatives, started banning the spreading of sewage sludge on agricultural land from 1997 onwards, despite French legislation legitimizing the practice. These groups did not justify their refusal on scientific grounds, but based their ban on the persistent uncertainty about the health and environmental impacts of sewage sludge on land and crops. In certain areas, this led to a massive refusal of sewage sludge by farmers, and to a collapse or contraction of the spreading market. Note 10. As Gadema and Oglethorpe (2011) have underlined, attempts to account for the interdisciplinary and overlapping nature of climate change and its inextricable linkages to food systems appear to be gaining credence, as the agency, corporate and government response increasingly features stakeholder involvement in policy formation.
Note 11. Brady, Klark and Davis (1995) developed a similar argument by integrating cognitive dissonance theory into the public choice analysis of competition among pressure groups: the cognitive dissonance phenomenon may be used by activist groups to generate collective fears. Contrary to Kuran and Sunstein (1999), these authors do not analyze collective belief formation processes and their effects on public policy. Note 12. See Tversky and Kahneman (1974). For an overview of the various biases and heuristics in risk perception, see Sunstein (1998). Noll and Krier (1990) discussed the contribution of cognitive psychologists to risk perception and analyzed its impact on irrational public policy (see also Viscusi & Zeckhauser, 1990). Note 13. Viscusi and Hamilton (1999) also analyzed the issue of irrational risk policy (notably the Superfund program established in 1980) and showed the impact of risk perception biases and political factors on public policies. They showed that these policies reflected judgment errors because public decision makers were subject to pressure from citizens who held erroneous judgments about risks. Note 14. Superfund is the federal government's program to clean up the nation's uncontrolled hazardous waste sites. Note 15. A study by Allen (1987) comparing the ranking of environmental problems by U.S. citizens and by EPA experts showed different evaluations by these two groups: the most important risk for the public is that of toxic waste sites, whereas EPA experts do not consider it especially dangerous. For Viscusi (1998), this erroneous perception of risks is attributable to the wide media coverage of the Love Canal story. Note 16. Simon (1982) was the first scholar to develop the concept of bounded rationality. Note 17. Kuran and Sunstein (1999) extend Slovic's (2000) analysis of the social amplification of risks. Although Slovic is aware of the importance of the interdependence of individuals in risk perception, he does not explain the social mechanisms leading to such biased perception; for Kuran and Sunstein (1999), these mechanisms are reputational and informational cascades. Note 18. As far as the political entrepreneur is concerned, we refer to the definition of Simmons et al. (2011), who develop a model of political entrepreneurship. Combining Sutter's (2002) definition, which holds that entrepreneurs "discover innovative ways to coordinate individual action for successful collective action", with Boettke's (1993) definition, which states that the entrepreneur is a human actor "who possesses the propensity to pursue goals effectively, once ends and means are clearly identified, but also possesses the alertness to identify which ends are to be sought and what means are available", they define political entrepreneurship as "alertness to unnoticed opportunities to achieve desired political outcomes". In Kuran and Sunstein's story, activists may therefore be defined as political entrepreneurs.
Note 19. It may explain, for example, consumption behaviour aimed at increasing an individual's status.
Note 20. This argument was developed by Kuran (1991) to explain the emergence of revolutions; it focuses particularly on the impact of the internal and external costs associated with opposition to the incumbent regime. Note 21. Hung and Plott (2001) demonstrated the existence of reputational cascades through laboratory experiments.
Note 22. Such an idea seems validated by Morris (2001), who showed that reputation effects lead to a loss of information whose social value is high. Note 23. Bonardi (2005) focuses particularly on the impact of collective beliefs on firms' reactions. He suggests that firms may themselves adopt strategies of political entrepreneurship to fight collective beliefs that endanger their profits or market shares.
Note 24. Inspired by the work of Kuran and Sunstein (1999), Lemennicier (2002) analyzes the interaction between informational and reputational mechanisms and their effects on risk regulation. He formalizes the reputational cascade phenomenon within the expert community and offers an explanation of the tobacco advertising ban. Bonardi (2005) also cites Page (1999), who argues that the antitrust decisions against Microsoft cannot be explained by the lobbying activities of competitors such as Netscape but, instead, by a "general opinion" conveyed by experts in the economic field; the process of reputational and informational cascades helps explain this result. Jaeck (2011) likewise applies Kuran and Sunstein's framework to explain the differences in climate change regulation between the US and the EU, and Jaeck and Bougi (2010) have developed a theoretical model of political business cycles based on the assumption of cascade effects to deal with regulation cycles in the field of environmental policy. Note 25. Information about the indeterminate characteristic is not transmitted through the label, which refers only to the good's credence characteristic, notably through the carbon footprint. This information concerns, rather, the product's environmental impact, namely its ability to contribute to GHG emission reduction, and it may emanate from various external sources, such as a green public campaign by the good's producer, a government climate change campaign, or environmental expert discourse in the media. Note 26. Note that Welsch and Kuhling (2009) have provided empirical evidence of the effect of reference groups on pro-environmental consumption: they argue that reference groups are important when the mode of consumer choice is imitation and social comparison. In this respect, cascade theory offers a challenging explanation of the sources of such imitative behaviour. Note 27. Hung and Plott (2001) provided evidence of such reputational cascades through laboratory experiments.
Note 28. In the field of the carbon labelling market, identifying empirical evidence of such corporate political strategies is an interesting avenue for further research.
Figure 1. The random selection process
"year": 2012,
"sha1": "280d55658d6fd1781d7b0d032d6fd0a4d4b6bbde",
"oa_license": "CCBY",
"oa_url": "https://ccsenet.org/journal/index.php/ibr/article/download/23516/15012",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "280d55658d6fd1781d7b0d032d6fd0a4d4b6bbde",
"s2fieldsofstudy": [
"Political Science",
"Environmental Science",
"Economics"
],
"extfieldsofstudy": [
"Economics"
]
} |
A Case Report of Desmoid Tumour—a Forgotten Aspect of FAP?
INTRODUCTION: Desmoid tumours are locally aggressive tumours which are common in Familial Adenomatous Polyposis (FAP). PRESENTATION OF CASE: A 20-year-old Familial Adenomatous Polyposis (FAP) patient presented with abdominal pain and distention. Abdominal imaging showed small bowel obstruction and hydronephrosis due to a pelvic mass. This mass showed significant enlargement on repeat imaging, and a diagnostic biopsy confirmed desmoid tumour. The mass was deemed unresectable and he was initially started on sulindac and raloxifene. Repeat imaging however showed further enlargement of the tumour, and therefore vinblastine + methotrexate chemotherapy was commenced, with a good response. DISCUSSION: FAP is an autosomal dominant condition caused by a germline mutation in the adenomatous polyposis coli (APC) gene. Gardner's syndrome is also caused by a mutation in the APC gene, and is now considered a different phenotypic presentation of FAP. Desmoid tumours are initially kept under observation while their size remains stable. Treatment options for enlarging desmoid tumours include surgery (first-line), radiotherapy, and systemic therapy with non-cytotoxic and cytotoxic agents. CONCLUSION: FAP patients should be examined regularly post-panproctocolectomy, since desmoid tumours may arise. The presence of epidermal cysts in this FAP patient suggests a diagnosis of Gardner's syndrome.
Introduction
Desmoid tumours are the second commonest tumour in Familial Adenomatous Polyposis (FAP) after colonic adenomas. Although they do not metastasise, they are often locally aggressive, and usually present with symptoms due to compression of adjacent structures, such as bowel or ureter. The authors describe the case of desmoid tumour in a Familial Adenomatous Polyposis (FAP) patient who presented with abdominal pain and distention.
Patient information
A 20-year-old Maltese gentleman presented to the Emergency Department with a 2-day history of abdominal pain. The abdominal pain was severe, particularly over the right flank radiating to the back, and was associated with nausea and vomiting. He also complained of worsening abdominal distention, fatigue and weight loss.
The patient had a history of Familial Adenomatous Polyposis (FAP), diagnosed at age 7. At the age of 18 he had a laparoscopic restorative panproctocolectomy with ileal pouch-anal anastomosis. In late adolescence, he had two skin lumps removed from his feet, with histology showing pilomatrixoma-type features arising in a background of epidermal cyst, raising the possibility of Gardner's syndrome.
Clinical findings
On examination, the patient appeared dehydrated and mildly hypotensive, with a blood pressure of 105/60 mmHg. His parameters were otherwise stable and he was afebrile. The abdomen was distended, and there was mild generalized abdominal tenderness with no rigidity or guarding. Rectal examination was within normal limits. Examination of the lower limbs revealed bilateral mild oedema, with no erythema or calf tenderness.
Diagnostic assessment
Initial investigations revealed a normal blood count and an elevated urea level. Computed Tomography (CT) showed marked small bowel distention with obstruction due to a mass in the region of the ileal pouch. The mass extended into the pre-sacral space (Fig. 1) and was associated with an enlarged iliac lymph node complex. The large mass was obstructing the lower right ureter with resultant hydroureter and hydronephrosis (Fig. 2). A right-sided ureteric stent was inserted. Pouchoscopy was attempted but the pouch was inaccessible due to a very narrow lumen. Biopsies were taken from abnormal mucosa below the pouch, and histology showed chronic colitis and an adenomatous polyp with low grade dysplasia. The patient was discharged home with close follow-up.
Pouchoscopy was repeated two months later and biopsies were taken from an erythematous area, with histology showing mucosal lymphoid hyperplasia with no evidence of malignancy. US-guided fine needle aspiration (FNA) of the mesenteric lymph nodes showed no malignant cells. Repeat CT five months after presentation showed a significant enlargement of the right pelvic mass (Fig. 3). The abdominal lymphadenopathy had however decreased in size. Significant small bowel dilatation persisted (Fig. 4).
The case was discussed with the Multi-Disciplinary Team who deemed the mass to be a desmoid tumour due to the history of FAP. The enlarging desmoid tumour was causing chronic incomplete small bowel obstruction and hydronephrosis. In view of the pilomatrixoma-type features of the epidermal cysts, the patient was likely to have Gardner's syndrome.
Therapeutic intervention
The patient was then transferred to St. Mark's Hospital in Middlesex, United Kingdom for expert management. He was commenced on parenteral nutrition and underwent an explorative laparotomy, at which the tumour was deemed unresectable. A loop ileostomy was formed just proximal to the ileoanal pouch; a diagnostic biopsy taken from the presacral mass confirmed desmoid tumour. Combination non-cytotoxic anti-desmoid medication (sulindac and raloxifene) was started in view of his age and in an attempt to obtain higher response rates than achievable with monotherapy.
Follow-up and outcomes
Repeat CT 12 months after presentation showed that the pelvic mass had more than doubled in size, then measuring 22 × 20 × 16 cm (Fig. 5). He was referred to Oncology at 16 months post-presentation and, given the urgency posed by the rapid disease progression, cytotoxic therapy was indicated. Echocardiography showed an LVEF of 55%, and he was therefore started on weekly vinblastine + methotrexate chemotherapy in preference to an anthracycline-based regimen. After a two-month period, Magnetic Resonance Imaging (MRI) showed a slight reduction in the intra-abdominal mass. At four months, tumour size was stable, and it had shrunk to 15 × 16 × 16 cm by 3 months after completion of the 1-year course of chemotherapy.
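The serial tri-axial measurements above can be turned into rough volume estimates. The short Python sketch below treats the mass as an ellipsoid, a common radiological approximation that is our assumption rather than anything stated in the report; the dimensions are those quoted in the text.

```python
import math

def ellipsoid_volume_cm3(a: float, b: float, c: float) -> float:
    """Volume of an ellipsoid from its three maximal diameters (cm)."""
    return math.pi / 6.0 * a * b * c

# Dimensions reported in the case: 22 x 20 x 16 cm at 12 months,
# 15 x 16 x 16 cm after completion of chemotherapy.
v_pre = ellipsoid_volume_cm3(22, 20, 16)   # ~3687 cm^3
v_post = ellipsoid_volume_cm3(15, 16, 16)  # ~2011 cm^3

print(f"Estimated reduction in volume: {1 - v_post / v_pre:.0%}")  # ~45%
```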
Discussion
Desmoid tumours are rare fibromatous lesions that are non-metastasising but locally aggressive, and they have a high rate of recurrence even after complete resection [1,2]. These slow-growing tumours commonly arise in the abdominal wall or in the intra-abdominal region.
2% of cases of desmoid tumours are associated with FAP [1]. Whereas the incidence of desmoid tumours in the general population is 2-4 per million per year, the incidence in FAP is 10-20%, meaning that a patient with FAP is at an 852-fold increased risk of developing a desmoid tumour [2,3,6]. While sporadic desmoid tumours show a slight female preponderance, there is an equal incidence in males and females among FAP patients [8]. Familial aggregation is seen in FAP desmoid tumours, but not in sporadic cases [7].
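The 852-fold figure quoted above comes from the cited registry studies; as a rough, purely illustrative check of what such a relative risk implies, the sketch below combines it with the general-population incidence given in the text (the midpoint of 2-4 per million per year is our choice, not a figure from the sources).

```python
# Illustrative arithmetic only: the 852-fold relative risk is taken from
# the cited literature [2,3,6]; the general-population midpoint is our assumption.
general_incidence = 3.0e-6   # desmoid cases per person-year (midpoint of 2-4/million)
fold_increase = 852          # relative risk of desmoid tumours in FAP

fap_incidence = fold_increase * general_incidence
print(f"Implied incidence in FAP: {fap_incidence * 1000:.1f} per 1000 person-years")
# -> roughly 2.6 desmoid tumours per 1000 FAP patients per year
```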
FAP is an autosomal dominant condition caused by a germline mutation in the adenomatous polyposis coli (APC) gene, on chromosome 5q21 [4]. FAP is characterized by hundreds of adenomatous colorectal polyps, with an almost inevitable progression to colorectal cancer at an average age of 35-40 years [4]. Biallelic mutations of the APC gene induce desmoid tumour formation, hence the association between these two disorders [2]. Surgical trauma has been identified as a predisposing factor for desmoid tumour formation in FAP, with one key study from 1994 noting a history of previous abdominal surgery in 68% of FAP patients with abdominal desmoids, with 55% occurring within 5 years postoperatively [7].
Gardner's syndrome is also caused by a mutation in the APC gene. It includes extracolonic manifestations, both benign, such as osteomas, skin cysts, congenital hypertrophy of the retinal pigmented epithelium and desmoid tumours and malignant, such as duodenal, thyroid, pancreatic, liver and central nervous system cancers. Previously, Gardner's syndrome used to be considered as a separate entity, but it is now considered a different phenotypic presentation of FAP [4].
Intra-abdominal desmoids are usually asymptomatic. They become symptomatic when they compress or have infiltrated surrounding viscera. This may result in intestinal obstruction, ischemic bowel secondary to vascular compression, and hydronephrosis due to ureteric compression. Desmoid tumours can also rarely lead to bowel perforation as well as deep vein thrombosis, pyrexia of unknown origin, gastrointestinal bleeding and intra-abdominal abscess formation [2].
Patients with FAP should be closely followed up regularly with abdominal examinations in order to look for any signs of tumours and obstruction [1]. CT and MRI are used to identify and monitor these tumours [2], but a biopsy is necessary to confirm the diagnosis [1]. Desmoid tumours are initially kept under observation while their size remains stable. Surgical excision with a safety margin is the first-line treatment for enlarging desmoid tumours.
Desmoid tumours are star-shaped tumours with infiltrative growth, so complete excision entails a large resection. In the past, radical surgical resection with the goal of achieving complete excision with negative margins was standard, as for sarcomas. However, whereas positive margins are a predictor of local failure in the case of sarcomas, this has not been shown consistently for desmoid tumours, as indolent desmoid tumours will not recur regardless of margin positivity. Therefore, radical surgery for all desmoid tumours may lead to unnecessary morbidity. Studies have shown that patients with poor-prognosis desmoid tumours would benefit most from clear surgical resection margins, and the extent of surgery now depends on the biology of the tumour, with function-sparing surgery that does not leave macroscopic residual disease being the treatment of choice in many cases [9]. Recurrence rates after surgery range from 60 to 85%, highlighting the importance of regular follow-up and monitoring [2].
Radiotherapy is considered for patients who are not good surgical candidates, or when there remains gross residual disease postoperatively. Systemic therapy is used at relapse; non-cytotoxic drugs are used first-line in situations of acceptably low risk, while cytotoxic chemotherapy is indicated at disease progression or in clinically urgent situations. Non-cytotoxic options include anti-oestrogens (e.g. tamoxifen), prostaglandin inhibitors, non-steroidal anti-inflammatory drugs (NSAIDs) and imatinib [2,5]. Possible chemotherapeutic regimens for advanced desmoid tumours include doxorubicin + dacarbazine, liposomal doxorubicin, vinblastine + methotrexate, vinorelbine + methotrexate and vincristine + actinomycin-D + cyclophosphamide [5].
Prognosis depends on the stage of the desmoid tumour (I to IV), with five-year survival being 95% at Stage I (asymptomatic, <10 cm maximum diameter, and not growing) and 76% at Stage IV (severely symptomatic, or >20 cm, or rapidly growing) [10,11].
Patient perspective
The patient described an initial period of anxiety until a definitive diagnosis was reached, and when disease progression was initially noted. Once cytotoxic therapy was started and repeat imaging showed stable disease, the patient felt more comfortable and content.
Conclusion
Since desmoid tumours are common in FAP, patients should be examined regularly post-panproctocolectomy for any signs suggesting the presence of intra-abdominal desmoid tumours. The patient most likely had a diagnosis of Gardner's syndrome, as suggested by the epidermal cysts. This is a distinctive phenotype of FAP, and the presence of desmoid tumours is well-documented. The patient was initially treated with non-cytotoxic therapy, however cytotoxic therapy was started at disease progression. Vinblastine + methotrexate combination therapy had a good response.
Conflict of interest
The authors declare no conflict of interest.
Ethical approval
Not relevant.
Consent
Not relevant since there are no identifying images.
Author contribution
Sarah Xuereb is the corresponding author. She is a final year medical student at University of Malta Medical School, and she was responsible for the writing of the case report together with Rachel Xuereb and Chiara Buhagiar, final year medical students at the University of Malta Medical School. Jonathan Gauci is a Higher Specialist Trainee in Respiratory and General Medicine who was responsible for the care of the patient while training at Sir Anthony Mamo Oncology Centre, Mater Dei Hospital, and was involved in the writing of the case report. Claude Magri is a Consultant in Clinical Oncology at Sir Anthony Mamo Oncology Centre, Mater Dei Hospital who was responsible for the primary care of the patient from the Oncology aspect, and supervised the writing of this case report.
"year": 2017,
"sha1": "e12354284fe139edf43a467b7f169c4d5557cf15",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.ijscr.2016.11.052",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2fb9d059252b1b119bc6ad7aa4636342fb67b0ff",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Intervention fidelity in a school-based diet and physical activity intervention in the UK: Active for Life Year 5
Background Active for Life Year 5 (AFLY5) is an educational programme for Year 5 children (aged 9–10) designed to increase children’s physical activity, decrease sedentary behaviour and increase fruit and vegetable intake. This paper reports findings from a process evaluation embedded within a randomised controlled trial evaluating the programme’s effectiveness. It considers the fidelity of implementation of AFLY5 with a focus on three research questions: To what extent was the intervention delivered as planned? In what ways, if any, did the teachers amend the programme? and What were the reasons for any amendments? Methods Mixed methods were used including data collection via observation of the intervention delivery, questionnaire, teacher’s intervention delivery log and semi-structured interviews with teachers and parents. Qualitative data were analysed thematically and quantitative data were summarised using descriptive statistics. Results Following training, 42 of the 43 intervention school teachers/teaching staff (98 %) were confident they could deliver the nutrition and physical activity lessons according to plan. The mean number of lessons taught was 12.3 (s.d. 3.7), equating to 77 % of the intervention. Reach was high with 95 % of children in intervention schools receiving lessons. A mean of 6.2 (s.d. 2.6) out of 10 homeworks were delivered. Median lesson preparation time was 10 min (IQR 10–20) and 28 % of lessons were reported as having been amended. Qualitative findings revealed that those who amended the lessons did so to differentiate for student ability, update them for use with new technologies and to enhance teacher and student engagement. Teachers endorsed the aims of the intervention, but some were frustrated with having to adapt the lesson materials. Teachers also reported a tendency to delegate the physical activity lessons to other staff not trained in the intervention. Conclusions Fidelity of intervention implementation was good but teachers’ enthusiasm for the AFLY5 programme was mixed despite them believing that the messages behind the lessons were important. This may have meant that the intervention messages were not delivered as anticipated and may explain why the intervention was found not to be effective. Trial registration ISRCTN50133740.
Background
Physical activity has been associated with lower levels of cardio-metabolic risk factors, improved mental well-being and a lower risk of obesity in young people [1]. Fruit and vegetable consumption is associated with lower caloric intake and a reduction in the risk of many forms of cancer and heart disease among adults [2][3][4][5][6][7], with patterns of consumption being established in childhood [4,8]. Many young people in the UK do not meet the current recommendation of an hour of physical activity on most days of the week [9,10] and do not consume the recommended 5 fruits and vegetables per day [11]. Developing strategies to increase young people's physical activity and fruit and vegetable consumption and decrease sedentary behaviour is an important public health priority.
Schools provide opportunities to reach the majority of children and as such have enormous potential to provide physical activity and nutrition focussed public health interventions [12]. Recent school-based interventions designed to improve diet and physical activity and decrease sedentary behaviour have reported only modest positive outcomes [3,[13][14][15] so more effective interventions are required. When designing new interventions it is important to learn from previous successes and failures [16][17][18] and assessing intervention fidelity is a key component of this process.
Implementation fidelity is the extent to which an intervention is delivered as expected [18][19][20][21][22][23]. Assessing fidelity [19,20,22,[24][25][26] can help determine: whether and how variations in delivery occurred [19,23]; whether or not the intervention was likely to be effective [19,22]; and how variations in delivery may have affected intervention outcomes [19,23]. Once fidelity has been established the sustainability of the intervention can be considered. An important aspect of sustainability is the potential level of fidelity of implementation when the intervention is carried out in a 'real world' setting [22,24]. Thus, in order to assess the sustainability of the intervention it is necessary to consider how variations in delivery occurred and identify aspects of the intervention that can be modified without compromising the underpinning theory and 'spirit' of the intervention or having an adverse impact on effectiveness.
Active for Life Year 5 (AFLY5) was a cluster randomised controlled trial in state primary schools designed to increase physical activity and fruit and vegetable intake while also decreasing sedentary time. The AFLY5 intervention comprised of training for Year 5 teachers and Learning Support Assistants on how to teach the intervention which consisted of 16 lesson plans, 10 homeworks in which the children were encouraged to work with their parent, two parental information leaflets, and inserts for school newsletters. Table 1 lists the intervention elements, resources provided and the delivery timetable. Control schools were offered the opportunity to receive the training and intervention materials after the final follow-up measurements were taken in the summer of 2013.
The effectiveness evaluation of AFLY5 found that there was no difference in the primary outcomes of accelerometer-measured physical activity, sedentary time or self-reported fruit and vegetable consumption among children in intervention compared to control schools, though there were beneficial effects with respect to reducing self-reported consumption of high energy drinks, snacks and screen-viewing time [27]. This paper reports findings from a process evaluation embedded within the randomised controlled trial evaluating the programme's effectiveness. It considers the fidelity of implementation of AFLY5 with a focus on three research questions: To what extent was the intervention delivered as planned? In what ways, if any, did the teachers amend the programme? And what were the reasons for any amendments? Findings from the process evaluation not included in this paper are reported elsewhere (Lawlor D, Kipping R, Anderson E, Howe L, Chittleborough C, Moure-Fernandez A, Noble S, Rawlins E, Wells S, Mytton J, et al.: Active For Life Year 5: A cluster randomised controlled trial of a primary school-based intervention to increase levels of physical activity, decrease sedentary behaviour and improve diet. Forthcoming) [28].
Methods
The AFLY5 trial was conducted in Bristol and North Somerset in England and ran from May 2011 to July 2013 [27]. The intervention was delivered from September 2011 to July 2012 to children in Year 5 (aged 9-10). Sixty schools took part, with 30 schools randomly allocated to the intervention and 30 to the control arm. One school randomised to the intervention arm later decided not to deliver the intervention but agreed to participate in trial data collection. All children in both the intervention and control schools took part in a series of measurements at baseline (Year 4), post intervention (Year 5) and at follow-up one year later (Year 6) (see the trial protocol [29] and statistical analysis plan [30] for more details).
Process evaluation data were collected at three different phases of the project. Phase one started before the intervention began; phase two ran during the intervention; and phase three began once the intervention had ended. Details of the number of participants in each of these phases, and the methods used to collect and analyse the data are provided below and summarised in Table 2. Data were collected to evaluate the fidelity of intervention delivery, explore whether teacher, parent and child responses were consistent with how the intervention was theorised to act in our logic model and any potential barriers to wider dissemination should it prove effective.
Ethics and consent
The study was approved by the Faculty of Medicine and Dentistry Committee for Ethics at the University of Bristol (reference number 111253). All adult participants provided written informed consent, while children gave written informed assent. (In England children under 16 cannot legally give consent so when they agree to participate in research it is described as providing assent) [32].
Sampling
Phase 1: The five teacher training sessions, which involved all participating intervention schools, were observed. All participants were invited to provide structured feedback by questionnaire.
Phase 2: Observations of AFLY5 lessons being taught were carried out in all the intervention schools. The intention was to visit at least one class in every school at least once, and conduct two observations of each of the sixteen lessons being taught.
Phase 3: All teachers trained in delivery of AFLY5 were invited to participate in a semi-structured interview. Six intervention schools were purposively selected to ensure schools from localities with differing levels of area deprivation and with differing levels of teaching standards were represented. All study schools were defined as being in an area of high, medium or low deprivation by splitting them into thirds based on their score on the English Index of Multiple Deprivation 2010 (IMD 2010) and into having high or low levels of teaching quality defined by Office for Standards in Education (Ofsted) Scores [33]. (Ofsted is responsible for nationally assessing teaching quality in English schools and we used the schools' Ofsted scores at the time of entry into the study in July 2011.) The Ofsted scores were Outstanding or Good (high teaching quality) and Satisfactory or Inadequate (low teaching quality). This process created 6 groups. Initially one school per group was randomly selected and approached for recruitment into Phase 3 of the study. If they declined or did not respond another was randomly selected. When recruitment slowed, all remaining schools were invited to participate on a first come first served basis. We aimed to interview two to three parents per school, and if more parents than required offered to take part in an interview, parents were selected to ensure a mix of: parent and child gender and children from across the Year 5 classes (if appropriate).
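As a concrete illustration of the stratified selection described above, the Python sketch below splits a hypothetical set of schools into IMD tertiles and a binary Ofsted band, then draws one school at random from each of the six resulting strata. All field names and values are invented for illustration; only the stratification logic follows the paper.

```python
import pandas as pd
import numpy as np

rng = np.random.default_rng(seed=0)
schools = pd.DataFrame({
    "school_id": np.arange(1, 30),
    "imd_score": rng.uniform(5, 60, size=29),  # hypothetical IMD 2010 scores
    "ofsted": rng.choice(
        ["Outstanding", "Good", "Satisfactory", "Inadequate"], size=29
    ),
})

# Thirds of the deprivation score distribution, as in the paper
schools["imd_band"] = pd.qcut(schools["imd_score"], 3,
                              labels=["low", "medium", "high"])
# Ofsted collapsed into the two teaching-quality bands used in the paper
schools["quality"] = np.where(
    schools["ofsted"].isin(["Outstanding", "Good"]), "high", "low"
)

# One randomly selected school per stratum (6 strata in total)
selected = (schools.groupby(["imd_band", "quality"], observed=True)
                   .sample(n=1, random_state=0))
print(selected[["school_id", "imd_band", "quality"]])
```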
Data collection
Details of all aspects of the data collected and the levels of response can be found in Table 2. In phase 1 all teachers and Learning Support Assistants who attended the training were asked to complete the training evaluation questionnaires. In phase 2 lesson observations were arranged at a convenient time for the teachers. Data collected during observations of the teacher training and lessons were largely qualitative as they mainly comprised the detailed notes written by the researcher describing what took place, with the researcher paying particular attention to specific topics such as level of engagement of those being trained or taught, questions asked and the suitability of the content for the ability of the group being taught. All teachers involved with the delivery of AFLY5 were asked to complete intervention delivery logs and return these to the research team. These logs sought information on, for example, who taught the lessons, whether or not they had been trained in the intervention, the date each lesson took place and how long it lasted. Teachers were also asked to record details of any amendments that they made. (Copies of all the instruments used in the process evaluation can be found in the AFLY5 process evaluation plan [31].) In phase 3 semi-structured interviews were conducted with Year 5 teachers in intervention schools. Topics included what they thought contributed to a healthy lifestyle both generally and for children, teaching health promotion in schools, whether they were involved with any health promotion projects, whether school-based health promotion education is effective in changing children's behaviour, and their experience of teaching AFLY5 lessons. Year 5 parents from each of the six intervention schools were invited to participate in an interview and asked what they thought contributed to a healthy lifestyle, whether they were aware of their children having been involved in any lessons at school about healthy lifestyles and if their children had brought home any homeworks on healthy lifestyles. With the exception of two of the five teacher training sessions that were observed by a colleague, all observations and interviews were undertaken by ER who was not involved in the AFLY5 intervention development, a point which was made clear to participating schools.
Data preparation
Data were extracted from the teacher training evaluation questionnaires and teacher intervention delivery logs and entered in an Access database. All handwritten teacher training and lesson observation notes were typed onto a structured pro-forma and interviews were digitally recorded and transcribed in full. Data from observations, interviews, and free text from the teacher logs and evaluation questionnaires were entered into NVivo10 (NVivo qualitative data analysis software; QSR International Pty Ltd. Version 10, 2012).
Data analysis
To assess data consistency when calculating the number of amendments made to lessons, a table was created to compare data from the teacher logs and lesson observations on a lesson by lesson basis. Descriptive summary statistics (means or medians, standard deviations or interquartile ranges and/or percentages) were calculated using an on treatment approach for the following quantitative variables: (i) training participants confidence teaching the nutritional lessons (ii) training participants confidence teaching the physical activity lessons (iii) lesson preparation time in minutes (iv) lesson preparation time the same as or different from usual (v) number of lessons delivered (vi) the number of children receiving a lesson (vii) number of weeks over which lessons were delivered (viii) number of teachers providing AFLY5 lessons in each school (ix) training status of those delivering AFLY5 lessons (x) whether or not lesson were amended and (xi) number of homeworks given out.
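A minimal sketch of how summaries like those listed above might be computed from a teacher-log table follows; the column names and toy data are hypothetical, but the quantities mirror the paper's reporting (median with IQR for preparation time, means and percentages elsewhere).

```python
import pandas as pd

# Hypothetical teacher-log extract; real logs covered 16 lessons per teacher
logs = pd.DataFrame({
    "teacher_id":   [1, 1, 2, 2, 3, 3],
    "prep_minutes": [10, 20, 10, 15, 30, 10],
    "amended":      [False, True, False, True, True, False],
})

q1, med, q3 = logs["prep_minutes"].quantile([0.25, 0.5, 0.75])
print(f"Preparation time: median {med:.0f} min (IQR {q1:.0f}-{q3:.0f})")

lessons_per_teacher = logs.groupby("teacher_id").size()
print(f"Mean lessons logged per teacher: {lessons_per_teacher.mean():.1f}")

print(f"Lessons amended: {logs['amended'].mean():.0%}")
```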
Qualitative data were analysed by ER in NVivo10 using a thematic approach [34]. Codes were generated both from the topics in the interview guides as well as iteratively from the data, initially discussed with RK, and were categorised as a series of themes. The themes were discussed, refined and agreed by ER, RJ, SW and RC. They are illustrated in this paper by selected, anonymised quotes which typify the data from interviews or text extracts from observation notes or teacher's logs.
To maintain independence data analyses were undertaken by ER, SW, RJ and RC who were at that time blinded to the outcome of the main trial [35]. The analysts of the trial effectiveness data did not discuss results with anyone and submitted the final draft effectiveness paper to the chair of the AFLY5 steering committee. The analysts of this paper similarly submitted their final draft to the trial steering committee chair before seeing the results from the effectiveness paper.
Results
Quantitative implementation fidelity findings on training and preparation, dose (number of lessons taught), reach (proportion of children receiving the lessons), and lesson amendments are presented below. The qualitative data, from interviews with 20 teachers (recruited from the pool of 29 intervention schools) and 14 parents (from the sample of six intervention schools), provide a more nuanced picture to contextualise the quantitative data. Quantitative and qualitative data on the modernising amendments made to the lessons in terms of length, differentiation for ability and increased engagement are presented before the final section which focuses on the teachers' views of the intervention.
It was not possible to recruit a sample of six intervention schools for each of the six possible combinations of IMD and OFSTED score, however, there was an equal number of schools with high and low levels of teaching quality (three in each group) recruited, and two schools per IMD group (low, medium and high) were recruited.
The extent to which the AFLY5 intervention was delivered as planned

Training and preparation

In total, 44 teachers from 29 schools delivered the intervention. 43 participants attended the training; 42 of these were teachers and 1 was a learning support assistant. Data from the teacher training evaluation questionnaires indicated that 42 of the 43 (98 %) teachers/teaching assistants who attended training agreed or strongly agreed with the statements "I feel confident that I can teach the nutrition sessions as per the lesson plans" and "I feel confident that I can teach the physical activity sessions as per the lesson plans". Teacher interview data indicated that, on the whole, they appreciated having the opportunity to work through the programme during the training and, in particular, the opportunity to receive instruction on the physical activity component:

"I think the training we got when we came for the Active for Life was really, really helpful 'cause it certainly pointed out a few things to us […] about like how easy it was to run different activities." School 15, teacher 2 interview

"[…] I really liked the physical exercise training. And the activities that were supplied. Thought that was really good and gave me lots of ideas. I still use them, even though I've moved on to a different class […] I thought the lady who ran the course was quite inspirational."
School 50, teacher 3 interview
Using data reported by the 39 teachers who noted at least one lesson preparation time in the teacher log, the median length of lesson preparation time calculated across lessons delivered was 10 min (IQR 10–20). Data from 38 teacher logs relating to 450 lessons showed that for 47 % of the lessons teachers indicated that more preparation than usual was required, for 15 % less, and for 38 % the same amount of preparation time as usual. During the interviews several teachers expressed negative feelings about the extra preparation time required and noted that it was often needed for the physical activity sessions.
Reach and dose
The reach, or percentage of children receiving all of the lessons taught, calculated from teacher log data, was 95 %. The timeframe over which the lessons were delivered was a median of 17.7 weeks (IQR 9.1-23.3). The teacher logs indicated that there were two main patterns of delivery: a) a regular dose, fairly evenly spread out; and b) a varied dose that changed in response to lack of time, curriculum or engagement issues. As one teacher explained during an interview, they delivered AFLY5 in a variable dose because of the length of time required to deliver the AFLY5 programme and the potential for diminishing engagement over time: .....it went over a term or well over one term, normally every term's like a fresh start, something completely different […] so they need that chopping and changing 'cause otherwise […] they'd hate it and that's with anything, that's not just with Active for Life.
School 56, teacher 2 interview

41 of the 44 teacher logs were completed and returned to the study team. The remaining 3 teachers were contacted by telephone and asked to provide only the number of lessons delivered and (if possible) the dates of delivery. Data from these 44 teachers showed that the mean number of lessons delivered was 12.3 out of all 16 lessons (s.d. 3.7, median 13.5 lessons, range 1-16), which equates to 77 % of the intervention (a worked check of the dose figures appears at the end of this subsection). The mean number of physical activity lessons delivered was very similar to that for nutrition lessons. Of the 41 teachers who returned teacher logs and indicated that they had delivered some of the intervention, all delivered lesson 1, but delivery declined over the intervention period such that only 46 % delivered lesson 16. Seven teachers out of 41 (17 %) delivered all 16 of the lessons. The data from the teacher logs and interviews revealed that by far the most commonly mentioned reason for not delivering all the lessons was lack of time to fit all the lessons into an already full curriculum. This is explored further using data from the teacher interviews in the sections on amendments below.
The mean number of homeworks delivered, calculated from data given in the teacher logs, was 6.2 out of a total of 10 (s.d. 2.6, range 2-10) equating to 62 %. Teachers who did not hand out the homeworks stated in both the teacher logs and interviews that they had to prioritise core skills homework above those from the intervention: All our homework is literacy + numeracy at the moment, building up to end of year tests.
School 51, teacher 1, lesson 11, written extract from teacher's log
The homeworks were designed to reinforce learning covered in the lessons and encourage parental involvement. In interviews with the parents, five of the 13 interviewed stated with certainty that their children had received AFLY5 lessons, and could remember homework items that were definitely part of the AFLY5 programme. Other parents were unsure about whether their children had received the lessons and homeworks.
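The dose figures in this subsection reduce to simple ratios; the minimal check below uses only the values quoted in the text.

```python
TOTAL_LESSONS, TOTAL_HOMEWORKS = 16, 10

mean_lessons_delivered = 12.3    # mean across the 44 teacher logs
mean_homeworks_delivered = 6.2   # mean across teacher logs

print(f"Lesson dose:   {mean_lessons_delivered / TOTAL_LESSONS:.0%}")      # ~77%
print(f"Homework dose: {mean_homeworks_delivered / TOTAL_HOMEWORKS:.0%}")  # 62%
```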
Training status of those delivering the AFLY5 lessons
Of the 494 lessons with data on who delivered the lesson, 386 (78 %) were delivered by someone who had received the training. Of the 108 sessions recorded as taken by staff not trained in the intervention, 25 (23 %) were covered by a main class teacher, 20 (19 %) were taken by Preparation, Planning and Assessment cover staff who enable teachers to work away from the classroom, 13 (12 %) were delivered by supply teachers, 9 (8 %) by student teachers, 9 (8 %) by teachers whose status was not recorded, 1 (1 %) by a learning support assistant and the remaining 31 sessions (29 %) were taken by people whose status was not recorded. AFLY5 lessons were seen as suitable to hand over, since the lesson plan, worksheets and homeworks were prepared. As data from the teacher logs and interviews revealed, the lessons that were handed over to these staff members, who were not trained in the use of AFLY5, were often the physical activity or physical education (PE) lessons:

"I only taught a few lessons, my PPA cover took the majority […] He did more the physical activities, because he was taking PE."
Amendments to the AFLY5 intervention
There was no guidance in the written materials provided to teachers about amending or adapting the lessons but they were told at the teacher training sessions that they should teach the lessons in the order that they were listed but that they could amend content as long as the message and learning outcomes remained the same. Observations of the teacher training sessions indicate that teachers were already considering, at that stage, how the lessons could be adapted.
As they sit back down the teachers discuss how they might need to adapt a lesson for their own classes. One teacher is heard to say: "My kids can't read so those work cards won't work".
Observation notes, teacher training session on 'Physical Activity games' held on 27/09/2011
The participants (teachers) engage well with Trainer 4, making comments or asking questions throughout the session about how particular activities might work in their classes or how they might adapt the games.
Observation notes, teacher training session on 'A Safe Workout' held on 03/10/2011

Data from 39 teacher logs, when cross-referenced with 30 lesson observations, revealed that a total of 468 sessions had data showing whether or not the lesson was amended, and that 28 % were amended. A majority (89 %) of the teachers amended the resources or lesson content on at least one occasion, and each of the 16 lessons was amended by at least one of the teachers. Comparisons between lesson observations and teacher intervention delivery logs revealed that some teachers did not record amendments that were noted during the lesson observation. Of the 20 occasions where the teacher stated that they had not amended the session, the observation indicated that amendments had in fact been made in 9 (45 %) of these sessions.
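The cross-check between teacher logs and lesson observations can be expressed as a simple paired comparison, sketched below. The data here are invented for illustration; the paper's own figure was 9 of 20 (45 %) 'no amendment' log entries contradicted by observation.

```python
import pandas as pd

# Hypothetical paired records: what the teacher logged vs what was observed
paired = pd.DataFrame({
    "session":     [1, 2, 3, 4, 5, 6],
    "log_amended": [False, False, True, False, True, False],
    "obs_amended": [True,  False, True, True,  True, False],
})

# Restrict to sessions the log says were not amended, then ask how often
# the observer nevertheless recorded an amendment
no_amendment_logged = paired[~paired["log_amended"]]
discordance = no_amendment_logged["obs_amended"].mean()
print(f"Observed amendments among 'no amendment' log entries: {discordance:.0%}")
```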
Reasons for amendments to the AFLY5 intervention
During interviews with 20 teachers from intervention schools (9 of whom were from schools included in the process evaluation), those who reported amending lessons said that they did so because they felt that the lessons or resource materials did not fully meet their needs. The reasons for their adaptations fell into four main categories: adjusting length of lessons to suit the overall ability level of a particular class; a need to differentiate for differing ability; conversion for use with new technology; and making the lessons more appealing to children to ensure their engagement.
Length of the lessons
The restrictions of fitting the lessons into the curriculum meant that lessons had to be altered according to the needs of children. However, the teacher's perception of the children's ability or interest in the lessons themselves also led to amendments to lesson length. As a teacher explained, it was a case of assessing their children's needs almost on a lesson by lesson basis rather than applying the lessons as laid out in the plan: Just because when we looked at them, we go, there's no way it's going to take that long, I guess it's knowing your children, knowing what to do. ........
And we realised that it wouldn't, you know, what was a fifty minute lesson, you probably run that in half an hour.
School 56, teacher 3
Differentiation to take account of ability

Amendments to the lessons and resource materials were also needed to differentiate for children with lower levels of ability, Special Educational Needs, or for whom English was an Additional Language (EAL). These amendments varied from class to class, although a reportedly large number of changes were needed relating to the mathematical content, such as calculating the time spent on certain activities or the amount of sugar in certain drinks, as well as to the literacy content, as some of the vocabulary was deemed too complicated. As this teacher explained:

"I did like what the 'Active for Life' was trying to do, it didn't quite fit our curriculum really, and the materials were far too complicated […] because of the EAL issues."
Conversion of materials for use with new technologies
Amendments due to teaching style most often consisted of new slides that were compatible with interactive whiteboards. As one teacher explained when asked if they had made any amendments to the AFLY5 materials:
We used Active Inspire [interactive whiteboard][…]
when we were teaching the lessons just to get it in a kind of format that we can use, just to make it a bit more user friendly.
Engagement
Several of the amendments under the category of engagement could also be seen as 'user friendly' changes, since they were primarily to make the lessons or resource materials more interesting, for either the teachers, the children or both. Amendments in this category included altering activities to include new aspects such as writing poems, making up raps, creating posters or doing role play. As one teacher explained, in relation to the nutrition lessons: I just changed them, made them more fun. They were really boring.
School 50, teacher 3 interview
The idea that these lessons or materials needed to be made more 'fun' was mentioned in both the interviews and teacher logs, and was part of a theme identified in the qualitative data which indicated that teachers were unenthusiastic about the teaching materials in their original format because they felt that they were old fashioned.
Teachers' response to the intervention
While the quantitative fidelity of implementation data indicate that the AFLY5 intervention was well implemented, the interviews and teacher logs revealed a mixed view of the intervention. Teachers often noted the lack of time that they had to fit the lessons into an already full curriculum. This reasoning allowed teachers to present an acceptable 'public' explanation for not always implementing the intervention in full, one which pointed to a structural constraint and thus did not involve overt criticism of the intervention programme. There was a sense both during the interviews and in the analysis of the transcripts, however, that sometimes lack of time really meant a lack of enthusiasm to make time, or that the lessons were only fitted in when there was extra time. Teachers were not wholly negative or positive about the intervention; the vast majority of responses were mixed:

"So if anything this year we sort of almost missed it in a way because it was quite good at sort of, you know, filling, when we had little bits of time, pockets of time, we could, we could squeeze it in."
School 50, teacher 1a interview
This did not mean, however, that teachers disliked the overall purpose of AFLY5; on the contrary, they often mentioned that the messages behind the lessons were laudable but that there were presentational issues. As these teachers explained:

"It's an amazing initiative, I think it was really, really important but it was just a huge amount to get through."
School 56, teacher 2 interview
So we did, a lot of the ideas were very good. But I just felt that the whole programme needs updating.
School 51, teacher 2 interview
Problems or concerns with the resources provided as part of AFLY5 were mentioned by many of the teachers, either for not being suited to their class, as this teacher explains:

"Yeah I didn't use any of your worksheets, I think I adapted every one of your worksheets."
School 10, teacher 1 interview

or for being rather old-fashioned when compared to other available resources:

"I would suggest a DVD or website resource to support the learning […] Although good, the resource does seem unambitious and rather old-fashioned."
School 46, teacher 1, written extract from teacher's log

The fact that teachers felt they had to alter the materials, and that guidance and training on differentiation for ability was not provided as part of AFLY5, meant that there was a good deal of preparation for some teachers, and this could also have contributed to the narrative regarding their lack of time. The results presented earlier revealed that 46 % of teachers felt that, on average, they needed more time to prepare the AFLY5 lessons compared to regular lessons. This is perhaps not surprising given that these were completely new lessons. One limitation of this evaluation is that we did not determine how the preparation time for these lessons compared with that for any other completely new lesson. The trend towards more preparation time for PE lessons than nutrition lessons, for some of the teachers, could also reflect a general lack of enthusiasm for PE among some of the teachers. As this teacher reveals, this meant that when they were running out of time, PE components were often dropped:

"And I have to admit if there are any bits that I skipped it was the PE bits because we were doing PE anyway, but those required more preparation for me than a normal PE lesson."
School 10, teacher 1 interview
This could be seen as part of a wider issue relating to the lack of training and lack of confidence in delivering PE experienced by some primary school teachers. As this teacher explains when describing why the AFLY5 training was so helpful:

"I am fairly keen on sport and PE in general but perhaps not the most confident in being able to teach it to children and stuff. So sort of taking it on board and being positive about it and seeing a sequence of lessons come about from it was actually very, very good."
School 36, teacher 1 interview
Some schools have found that one way to address this problem is to employ dedicated staff responsible for delivering PE lessons across the school years. Teachers in some schools handed over all their PE lessons to these staff, and AFLY5 PE lessons were no exception. Again, there was a tendency for some teachers to hand over the PE lessons in particular:

"I mean the handbook is quite straightforward and he is a bit of a sports, more of a sports expert so he brought his sports expertise to it and what he tended to do was, he'd do the Active for Life lesson and then he'd finish it up with a game or something so they actually had sort of like extra PE."
Discussion
The data recorded in the teacher logs and observations of lessons presented in this paper show that AFLY5 was implemented with a good degree of fidelity. Reach was high, as 95 % of children in intervention schools received lessons; 77 % of all the lessons were taught and 62 % of the homeworks were delivered. This average dose of 77 % of lessons compares favourably to similar school-based interventions utilising a curriculum-based approach, such as Project Tomato, with an average of 45 % of school lessons implemented [36]; Planet Health, which recorded over 70 % of lessons delivered at 5 out of 6 of the process evaluation schools [37]; Eat Well and Keep Moving's figure of 71 % of lessons delivered [38]; and HEALTHY PE's 87.6 % implementation rate [39].
Teachers did, however, record having to amend and adapt 28 % of the lessons and the observations suggested that teachers may have under-reported amending the lessons or had a different understanding of what constituted an amendment. While teachers voiced support for the aims of AFLY5 their views of the programme itself were more mixed. After their training in AFLY5, teachers recorded feeling confident that they could deliver the lessons, but when interviewed at the end of the intervention some reported reticence about delivering the lessons on physical activity, and a tendency to delegate this teaching to another colleague. These issues may mean that the intervention was not as well delivered as the teaching logs suggested, and that the AFLY5 intervention was less successful than it would have been had these issues been anticipated and dealt with. This accords with the effectiveness evaluation of AFLY5 which found that there was no difference in the primary outcomes of accelerometer measured physical activity, sedentary time or self-reported fruit and vegetable consumption among children in intervention compared to control schools, though there were beneficial effects with respect to reducing self-reported consumption of high energy drinks and snacks and screen-viewing time [27]. This indicates that while quantitative accounts of fidelity suggest that fidelity was good, more qualitative approaches are also needed to observe exactly what happens during the intervention delivery, and to explore the responsiveness of those involved in the delivery, if a more complete understanding of why an intervention is or is not effective is to be gained.
Wider implications
Our findings have a number of implications for the development and evaluation of public health improvement interventions for use in educational settings. Firstly, the main reason for the omission of lessons or homeworks given by teachers in AFLY5 was a lack of time and pressure to focus on core literacy and numeracy skills. Finding the time to adapt the AFLY5 lessons for their children was also problematic for teachers. Educational policy in England and elsewhere increasingly emphasizes academic attainment, and support for personal, social and health education has been downgraded since the feasibility study [40]. Evidence shows, however, that health and education are inextricably linked, with the more educated enjoying better health and wellbeing, and students in good health having higher academic attainment [41]. Nevertheless, the primary purpose of schools is to educate, and those seeking to improve students' health need to work closely with teachers to ensure that interventions are understood to be addressing both educational and health goals so that the time spent on health improvement interventions is not perceived as doing so at the expense of educational attainment. One way of demonstrating this is to include both health and educational outcome measures in evaluations [42,43]. AFLY5, like many other studies, did not do this [12], but this should be regarded as an essential requirement of trials of any future health improvement interventions in schools. Co-production of interventions by teachers, public health experts, parents and children is another way of achieving this and is likely to result in greater implementation fidelity. While co-production can be challenging [44] and we are unaware of any evidence that co-production provides superior outcomes to alternative approaches, this inclusive method of intervention development intuitively seems preferable to researchers designing and then implementing interventions. The Birmingham healthy Eating, Active lifestyle for Children Study (BEACHeS) is a good example of a co-produced intervention [45], which showed evidence of promise in a pilot trial [46] and is now the subject of a definitive cluster RCT [47] in which many aspects of fidelity are being carefully documented.
Secondly, while most teachers endorsed the need to improve children's diets and increase levels of physical activity, some also expressed frustration with the lesson materials, which they felt were out-of-date and too generic. Teachers were particularly frustrated by the work needed to adapt the lesson plans to make them suitable for children with different levels of ability and more interactive so that they could be taught using new technologies such as interactive whiteboards. This likely reflects the rapid change in use of teaching IT relative to the considerable time period currently required to develop an intervention and rigorously evaluate its effectiveness. Materials used in AFLY5 were originally developed in the USA in the late 1990s [48] and were adapted in 2006 for the AFLY5 pilot and feasibility study, which was undertaken during 2006-2009 [49]. Following an application for funding and further development work [50], the full-scale RCT began in 2011. This timescale highlights the need for a more flexible approach to designing and evaluating interventions and also the challenge in deciding how much to change an intervention which has been used successfully elsewhere. As suggested by Craig and colleagues, pilot work should examine developmental uncertainties rather than simply being a small-scale version of the definitive trial [21]. There are already good examples of best practice when it comes to the recruitment and randomisation of schools in trials, so that in future smaller-scale piloting of the acceptability of intervention materials, perhaps integrated as an internal pilot stage of the main trial [51], would avoid intervention materials becoming out-dated and speed up the quest for effective public health improvement interventions.
Thirdly, our findings, like those of others [52], draw attention to the concerns that generalist schoolteachers have about teaching physical activity lessons. In our study some teachers said they valued the training AFLY5 provided on this, however, these lessons were more likely to be delegated to other staff who had not been trained in the AFLY5 intervention. Acknowledging this issue when designing physical activity lessons, and ensuring that all those likely to get involved in the delivery of such an intervention are trained in it would help to ensure that fidelity is maintained.
Limitations
The proportion of teachers who provided data and the amount of data provided by them varied considerably across schools. In the case of teacher logs, none were fully completed; therefore, they provided only a partial picture of what happened during the AFLY5 lessons. Again, this has implications for the design of future trials in schools, as comprehensive data collection also adds to the time teaching staff have to spend on something that may not be perceived as central to their job. There was potential for bias if only those who felt particularly strongly about either the intervention, or the research process itself, agreed to take part in interviews. However, as the majority of data considered in this paper came from all of the intervention schools in the trial, and a range of views were offered by teachers and parents, it seems unlikely that such a bias has influenced our findings.
Recruitment targets for parent interviews were based on previous research and were met in all but one of the intervention schools in the process evaluation. The recruitment process itself, however, was lengthy and both the parent and teacher interviews were carried out after the intervention finished (median was 288 days after the intervention ended). This could account for the lack of detail and recall in parental accounts and in some teacher accounts. In addition, the lack of detail around AFLY5 homeworks in parental accounts may also be due to the fact that despite the homeworks being designed to ensure the AFLY5 messages made it home to families, and some required parents to assist in their completion, the intervention was designed to fit in with the current curriculum and so AFLY5 homeworks may not have been easily distinguished from other homeworks.
Strengths
The major strength of this study is the use of multiple sources of data which has allowed us to cross check information reported on the same issue. This more detailed information has enabled us to build a more complete picture of how the intervention was delivered and received. This nuanced account of how and why the teachers adapted the intervention materials would have been difficult to achieve from the data recorded in the teacher logs alone or by using questionnaires. Thus, this paper highlights the value of incorporating qualitative research methods into process evaluation. A key strength of this study is that the analyses of data were conducted with no knowledge of the effectiveness of the intervention itself. This means that our conclusions regarding fidelity of the intervention's implementation were not influenced by knowing whether or not the intervention actually worked, or vice versa.
Conclusions
While the fidelity of implementation in terms of quantity of lessons and homeworks delivered was good, the difficulties of incorporating some of the AFLY5 materials into more technologically advanced and interactive current teaching practice, coupled with pressure on teachers' time and a need to adapt the materials to suit students' differing abilities and ensure their engagement, resulted in mixed enthusiasm for AFLY5. This, together with a tendency to delegate teaching of physical activity lessons to those not trained in the intervention, may have meant that the intervention messages were not as successfully delivered as anticipated, and may explain why the intervention was found not to be effective.
"year": 2015,
"sha1": "a93f68ecfa1fee02c2b4dab29bdbe40ce0189877",
"oa_license": "CCBY",
"oa_url": "https://ijbnpa.biomedcentral.com/track/pdf/10.1186/s12966-015-0300-7",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a93f68ecfa1fee02c2b4dab29bdbe40ce0189877",
"s2fieldsofstudy": [
"Education",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Commentary: Case Report: Abdominal Lymph Node Metastases of Parathyroid Carcinoma: Diagnostic Workup, Molecular Diagnosis, and Clinical Management
1 NET Unit, Department of Medical, Surgical and Experimental Sciences, University of Sassari—Endocrinology Unit, Sassari, Italy, 2 Section of Internal Medicine, Endocrinology, Andrology and Metabolic Diseases, Department of Emergency and Organ Transplantation, University of Bari Aldo Moro, Bari, Italy, 3 Endocrinology Unit, IRCCS Ospedale Policlinico San Martino, Genova, Italy, 4 Department of Internal Medicine, University of Genova, Genova, Italy, 5 IRCCS Ospedale Policlinico San Martino, Genova, Italy, 6 Department of Experimental Medicine, “Sapienza” University of Rome, Rome, Italy, 7 Neuroendocrinology, Neuromed Institute, IRCCS, Pozzilli, Italy, 8 Endocrinology Unit, Department of Clinical Medicine and Surgery, University Federico II, Naples, Italy, 9 Endocrinology Unit, Department of Clinical and Molecular Medicine, Sant’Andrea Hospital, Sapienza University of Rome, Rome, Italy
INTRODUCTION
In the March 2021 issue, Lenschow et al. reported the case of a 46-year-old woman with recurrent, programmed death-ligand-1 (PD-L1)-negative, tumor mutational burden (TMB)-high parathyroid carcinoma (PC), who showed stable disease as her best response on imaging and a three-fold drop in PTH after treatment with intravenous pembrolizumab (1).
Parathyroid carcinoma is a rare neuroendocrine tumour, accounting for <1% of all cases of primary hyperparathyroidism (2). While surgery represents the mainstay of treatment for both the primary tumour and metastases, patients no longer amenable to surgical resection often receive unsatisfactory systemic therapies, including cinacalcet, adjuvant radiotherapy, and alkylating agents (3).
In recent years, modulation of immune checkpoint protein expression has been recognized as a prominent mechanism of tumour immune evasion and survival, paving the way for new therapeutic approaches (4). Of note, monoclonal antibodies targeting the programmed cell death-1 (PD-1)/PD-L1 and/or the cytotoxic T lymphocyte antigen-4 (CTLA-4)-B7 pathway, hereinafter collectively referred to as immune checkpoint inhibitors (ICIs), have shown both clinical effectiveness and a favorable safety profile in patients with advanced solid tumours, and have been included in the treatment repertoire of several malignancies (5).
Given the remarkable results obtained by Lenschow et al., we looked for further evidence on the use of ICIs in PC. a. A previously published case report (7) described a patient whose disease responded after pembrolizumab administration. The tumour was assessed as PD-L1 negative by immunohistochemistry. Mutations in the MSH2 and MSH6 DNA mismatch repair genes, possibly resulting in a high replication error rate at microsatellite loci, were found in tumour samples through comprehensive gene profiling analysis; therefore, the patient was deemed eligible for treatment with pembrolizumab. Immune blockade of PD-1 resulted in a sustained reduction of pulmonary metastatic tumour burden, with concurrent normalization of both calcium and parathyroid hormone levels. b. We found no preliminary reports in international meeting abstract repositories. c. The search in clinical trial registers revealed two active trials, one of which fully matched our aim: NCT02834013 (DART: Dual Anti-CTLA-4 and Anti-PD-1 Blockade in Rare Tumors), a Phase 2 study evaluating the effects of nivolumab plus ipilimumab (arm I) versus nivolumab alone (arm II) in patients with rare solid tumours (94 listed histotypes, including PC). The primary outcome is the RECIST v1.1 objective response rate; major secondary outcomes include incidence of adverse events, best response, clinical benefit rate, overall survival, and progression-free survival. The study status is "Recruiting"; however, according to a very recent update of the protocol, accrual of parathyroid gland tumours has been closed.
DISCUSSION
To date, very limited evidence is available about the efficacy of ICIs in patients with PC. In this regard, some points should be taken into account. PD-L1 expression in pre-treatment tumour samples has been proposed as a marker of clinical response to anti-PD-1/PD-L1 immunotherapy in patients with advanced malignancies, including melanoma, non-small cell lung cancer, kidney cancer, colorectal cancer, and castration-resistant prostate cancer (8,9). Notably, immunohistochemistry-assessed PD-L1 expression was found in 4/18 patients (22.2%) with histologically confirmed PC (10), suggesting that immune checkpoint blockade may have a rationale in the treatment of this type of tumour. While PD-L1-overexpressing tumours tend to show more intense responses, experience with melanoma suggests that PD-1/PD-L1 blockade may also be beneficial in patients with low PD-L1 expression (11)(12)(13); therefore, a negative PD-L1 status should not definitively preclude the use of ICIs.
There is growing evidence that TMB can also predict response to ICIs, with high-TMB patients exhibiting a higher response rate to anti-PD-1/PD-L1 agents, possibly due to increased neo-antigen load and T cell infiltration in the tumour microenvironment (14,15). In a cohort of 16 patients with PC, Kang et al. recently found three cases with high (>20 mutations/Mb) TMB through comprehensive genomic profiling (16). Given the higher response rate observed in high-TMB patients, assessment of mismatch repair status and/or exome sequencing in tumour samples may help identify the patients most likely to benefit from anti-PD-1/PD-L1 agents, enabling a more personalized approach to treatment. The above-mentioned cases of PD-L1-negative, TMB-high tumours benefiting from pembrolizumab therapy further support this approach.
Moreover, PD-1 and CTLA-4 are acknowledged to exert non-redundant immunosuppressive effects (17). As there is robust evidence supporting greater efficacy of combined PD-1/CTLA-4 blockade over either monotherapy in advanced solid cancers (18), the possible inclusion of patients with PC in the NCT02834013 trial is giving rise to great expectations. Of note, two-drug ICI combination therapy is also under evaluation in patients with other aggressive endocrine tumours (19)(20)(21)(22).
Of further interest, hypocalcemia due to immune-related hypoparathyroidism has been reported as a rare complication following initiation of anti-PD-1 therapy in patients with non-parathyroid tumours (23,24). As a result, mitigation of hypercalcemia could be hypothesized as a beneficial adjunctive effect of anti-PD-1 agents in patients with PC, irrespective of their imaging response assessment.
In summary, currently available treatments for patients with recurrent PC are insufficient. ICIs, which are considered a milestone in oncology, may provide hope for the future therapy of this rare cancer. | 2021-06-19T13:21:19.885Z | 2021-06-18T00:00:00.000 | {
"year": 2021,
"sha1": "effce8e3402e7f5f6e0bafc7b2c5e4819af8abee",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fendo.2021.700806/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "effce8e3402e7f5f6e0bafc7b2c5e4819af8abee",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
231971382 | pes2o/s2orc | v3-fos-license | Environmental exposomics and lung cancer risk assessment in the Philadelphia metropolitan area using ZIP code–level hazard indices
To illustrate methods for assessing environmental exposures associated with lung cancer risk, we investigated anthropogenic air pollutant data in a major metropolitan area using United States Environmental Protection Agency (US-EPA) Toxic Release Inventory (TRI) data (1987–2017), and PM2.5 (1998–2016) and NO2 (1996–2012) concentrations from NASA satellite data. We studied chemicals reported according to the following five exposome features: (1) International Agency for Research on Cancer (IARC) cancer grouping; (2) priority EPA polycyclic aromatic hydrocarbons (PAHs); (3) component of diesel exhaust; (4) status as a volatile organic compound (VOC); and (5) evidence of lung carcinogenesis. Published articles from PubChem were tallied for occurrences of 10 key characteristics of cancer-causing agents for those chemicals. Zone Improvement Plan (ZIP) codes with higher exposures were identified in two ways: (1) combined mean exposure from all features, and (2) a hazard index derived through a multi-step multi-criteria decision analysis (MMCDA) process. VOCs and IARC Group 1 carcinogens accounted for 82.3% and 11.5% of the reported TRI emissions, respectively. ZIP codes along major highways tended to have greater exposure. The MMCDA approach yielded hazard indices based on imputed toxicity, occurrence, and persistence for risk assessment. While many studies have described environmental exposures and lung cancer risk, this study develops a method to integrate these exposures into population-based exposure estimates that could be incorporated into future lung cancer screening trials and benefit public health surveillance of lung cancer incidence. Our methodology may be applied to probe other hazardous exposures for other cancers. Supplementary Information The online version contains supplementary material available at 10.1007/s11356-021-12884-z.
Introduction
In the United States, lung cancer is the leading cancer killer in both men and women (Siegel et al. 2020). An estimated 135,720 Americans died of lung cancer in 2019, accounting for 22% of all cancer deaths (American Lung Association (ALA) 2020). The Philadelphia metropolitan area, part of the Greater Delaware Valley, has lung and bronchus cancer incidence rates of 75.8 per 100,000, higher than the national rate of 59.3 per 100,000 and Pennsylvania's rate of 64.0 per 100,000 for the 2013 to 2017 timeframe (The Centers for Disease Control and Prevention (CDC) 2020; Pennsylvania Department of Health (EDDIE) 2020). Data from the National Lung Screening Trial suggest that lung cancer screening (LCS) identifies less than 50% of patients who will develop lung cancer, and only 4% of individuals that are eligible for LCS seek low-dose computed tomography (Kramer et al. 2011; National Lung Screening Trial Research 2011a, 2011b). Developing methods to identify sub-populations at risk for developing lung cancer could improve the outcomes of lung cancer screening trials and inform "precision" lung cancer screening. More generally, precision public health refers to targeting valuable resources to the most vulnerable, as defined by Dowell et al. (Dowell et al. 2016).
While smoking is the main cause of lung cancer, accounting for close to 85-90% of all lung cancer cases, there are other environmental risk factors that contribute to lung cancer (International Agency for Research on Cancer (IARC) 2016; Zakaria et al. 2017). These exposures likely cause lung cancer in never smokers and increase lung cancer risk in smokers (Corrales et al. 2020). For example, combined exposure to asbestos and cigarette smoke, or diesel exhaust and smoking history increases incidence of lung cancer over that associated with smoking alone (Eckel et al. 2016;Benbrahim-Tallaa et al. 2012;Garshick et al. 2006). In the concept of monitoring life-long exposure to carcinogens for cancer risk, the "exposome" was originally promoted by Christopher Wild (Wild, 2005) and has been embraced by others (Dennis et al. 2017;Rappaport et al. 2014). While measuring an individual's exposome may not be possible, an alternative is to provide a population level assessment of hazardous exposures in the environment (Baldwin et al. 2013).
Air pollution is classified by the International Agency for Research on Cancer (IARC) as a Group 1 carcinogen (i.e., carcinogenic to humans) (IARC 2013) that accounts for more than 220,000 lung cancer deaths per year worldwide and shortens survival after diagnosis (Lelieveld et al. 2015;Loomis et al. 2013). In considering the causal effect of air pollution on lung cancer, it is necessary to consider the carcinogens present in this exposure source. For example, many polycyclic aromatic hydrocarbons (PAHs) (e.g., benzo[a]pyrene, benz[a]anthracene, dibenz[a]anthracene), routinely measured by the EPA (EPA 2008), have been classified by IARC (Khadhar et al. 2010). Air pollutants also include volatile organic compounds (VOCs) that are classified as either IARC Group 1 carcinogens (butadiene (IARC, 1999a), benzene (IARC, 2018), formaldehyde (IARC, 1999b)) or IARC Group 2B carcinogens (acetaldehyde) (Seitz and Stickel 2009). The products of diesel fuel combustion are another important contributor to air pollution and consist of a mixture of gases and fine particulates, such as nitroarenes, known as diesel particulate matter (DPM) (Consonni et al. 2018;Gharibvand et al. 2017;Occupational Safety and Health Administration, 2020). Although diesel technology has improved to control many of these harmful emissions, the speciation of individual nitroarenes and VOCs that are components of the mixture allows us to distinguish known chemicals in this mixture that may be released independently of vehicular diesel exhaust, including by light-duty cars/trucks and other industrial practices that may use off-road diesel-driven machinery (El-Bayoumy et al. 1989;Enya et al. 1997). Importantly, the nitroarenes include 3-nitrobenzanthrone, one of the most mutagenic compounds identified in the Ames test, and 6-nitrochrysene, a potent tumorigen in the newborn mouse lung.
It is problematic that many chemicals present in the environment are unclassified by IARC or have not yet been evaluated by IARC. Unclassified status by IARC (i.e., Group 3) does not necessarily mean a chemical is not carcinogenic to humans, but merely that there is insufficient evidence to date. For example, among 695 chemicals currently listed under the EPA Toxic Release Inventory (TRI) Program (EPA, 2020a), 18.8% are unclassified or not evaluated by IARC. In the IARC classification, mode-of-action data have been analyzed subjectively; by contrast, Smith et al. identified 10 key characteristics (KCs) of cancer-causing agents that contribute to carcinogenesis (Smith et al. 2016). Subsequent application of the 10 KCs determined their predictive value for chemicals classified by IARC and revealed strong evidence that multiple KCs exist for most Group 1 or 2A agents, supporting both their classification and the use of these KCs to capture the carcinogenic risk of unclassified chemicals (Guyton et al. 2018).
The objective of this research is to illustrate a methodology to aggregate environmental exposures known to be associated with increased lung cancer risk. Using the Philadelphia region as an example, we applied geospatial approaches to (1) map hazardous chemicals in the areas of interest using publicly available air pollution data, (2) synthesize multiple hazardous chemicals into one summary measurement, and (3) identify ZIP codes with high exposure using a hazard index in order to identify populations which may benefit from increased LCS.
Study region
The study area consists of 12 counties of the greater Delaware Valley: five in Pennsylvania (Bucks, Chester, Delaware, Montgomery, and Philadelphia Counties), six in New Jersey (Atlantic, Burlington, Camden, Gloucester, Mercer, and Ocean Counties), and one in Delaware (New Castle County). County boundaries for the year 2018 and ZIP code boundaries for the year 2019 were sourced from the United States Census Bureau (Census, 2020).
Environmental Systems Research Institute (ESRI)'s geospatial software, ArcGIS, was used to create spatial layers of the study area. We extracted ZIP codes which have their geographic centroid within the 12 counties, resulting in 421 ZIP codes. The WGS84 coordinate system was defined for all layers to ensure geographic consistency.
Data sources
Toxic Release Inventory data
EPA's TRI program tracks the management of toxic chemicals by point source that may pose a threat to human health and the environment (Bulka et al. 2016). We downloaded TRI annual-reporting data for 1987 to 2017 from EPA's Data Mart, with information on point source and the amount of chemical emissions (in pounds) released into either water, land, or air in each reporting year (EPA, 2020b). Given that our study investigates air pollution, we only considered air emissions, summing fugitive air and stack air emissions into one combined value. The total number of active TRI-reporting facilities varied over the years, peaking at 445 in 1990 and declining to 177 in 2017. We selected chemicals for the present analysis if they met any of the following exposomic features: (1) Is classified in IARC Groups 1 to 3 as a carcinogen. The grouping is as follows: Group 1: carcinogenic to humans; Group 2A: probably carcinogenic to humans; Group 2B: possibly carcinogenic to humans; and Group 3: not classifiable as to its carcinogenicity to humans (IARC 2020); (2) Is one of the EPA 16 priority PAHs, as a surrogate marker of exposure to carcinogens (Hussar et al. 2012); (3) Is a component of diesel exhaust (Steiner et al. 2016); (4) Is a VOC based on 113 chemicals listed in EPA's parameter code for VOCs (EPA 2020c); and (5) Is a lung carcinogen with limited or sufficient evidence of lung carcinogenesis (Cogliano et al. 2011). In total, 201 TRI chemicals met these criteria and were selected.
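For readers who want to reproduce this filtering and aggregation step, a minimal pandas sketch is shown below; the file names and column names (chemical, fugitive_air, stack_air, facility_id) are illustrative assumptions, not the actual EPA Data Mart schema.

```python
import pandas as pd

# Hypothetical file/column names; the real EPA Data Mart export differs.
tri = pd.read_csv("tri_1987_2017.csv")

# Air emissions only: sum fugitive and stack releases (pounds).
tri["air_emissions_lbs"] = tri["fugitive_air"] + tri["stack_air"]

# Keep chemicals meeting at least one of the five exposomic features,
# e.g. from a curated lookup of IARC group, PAH, diesel, VOC, and
# lung-carcinogen status (one row per selected chemical).
features = pd.read_csv("exposomic_features.csv")
tri = tri[tri["chemical"].isin(set(features["chemical"]))]

# Cumulative air emissions per facility across all reporting years.
by_facility = (
    tri.groupby(["facility_id", "latitude", "longitude"])["air_emissions_lbs"]
       .sum()
       .reset_index()
)
```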
NASA satellite data
Publicly available satellite-derived grids were sourced from NASA for years when data were available (NASA 2020). As a measure of fine particulates that may be impregnated with carcinogens, annual global surface concentrations of PM 2.5 in micrograms per cubic meter at 1-kilometer (km) resolution were available for 1998 to 2016 (NASA 2018). As a surrogate for traffic, global 3-year running means of NO 2 concentrations in parts per billion at 10-km resolution were available for 1996 to 2012 (NASA 2017).
Mapping and combining cumulative exposure
TRI facility locations and their reported air emissions for all available years (1987 to 2017) provided data to generate kernel density raster-level values, with a magnitude-per-1-km resolution area, for each of the five TRI exposomic features using the ArcGIS Spatial Analyst toolbox. NASA satellite data for both PM 2.5 and NO 2 were projected onto the Philadelphia study region as raster values for all available years. We used Raster Calculator, a built-in ArcGIS tool, to generate cumulative exposure layers for each of the NASA and TRI exposomic features by summing the grouped kernel density values across all available years. We presented maps of the density values, or "heat-maps", with a color gradient ranging from low to high emissions. Again using Raster Calculator, we incorporated the cumulative exposure layers of the NASA and TRI features into a single combined mean exposure layer by summing the values and dividing by the number of incorporated layers. Thirty years of TRI features, 18 years of NASA PM 2.5 data, and 16 years of NASA NO 2 data made up the combined incorporated layers. The resulting raster layer provides a gradient visualization of low to high combined mean exposure across all exposure sources studied.
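The Raster Calculator steps described above amount to element-wise array arithmetic. A minimal NumPy sketch, using small random grids as stand-ins for the real 1-km rasters already aligned to a common extent, is:

```python
import numpy as np

# Toy stand-ins for per-year rasters on a common grid; in practice these
# come from TRI kernel density surfaces and the NASA satellite grids.
rng = np.random.default_rng(0)
yearly_layers = {
    "tri_voc": [rng.random((4, 4)) for _ in range(30)],    # 30 years of TRI data
    "nasa_pm25": [rng.random((4, 4)) for _ in range(18)],  # 18 years of PM2.5
}

# Cumulative exposure per feature: sum the per-year rasters.
cumulative = {k: np.sum(np.stack(v), axis=0) for k, v in yearly_layers.items()}

# Combined mean exposure: sum all cumulative layers, divide by layer count
# (the Raster Calculator equivalent of the operation described in the text).
stack = np.stack(list(cumulative.values()))
combined_mean = stack.sum(axis=0) / stack.shape[0]
```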
Multi-step multi-criteria decision analysis
This multi-step multi-criteria decision analysis (MMCDA) is a risk assessment framework modified from EPA's existing multiple-criteria decision analysis (MCDA) framework (EPA 2015). The MCDA had previously been used for hazard evaluation of chemicals found in hydraulic fracturing fluids using "toxicity," "persistence," and "occurrence" criteria (Yost et al. 2017;Mitchell et al. 2013;Huang et al. 2011). The goal of the modified framework is to quantify and rank the risk of exposure to chemical mixtures emitted into the air or the environment. This approach integrates multiple exposures into one aggregate index for population-based risk estimates, assessing specific air pollutant chemicals to derive a hazard index. It allows for the scoring of chemical toxicity (in some instances based on a literature search weighting the presence of the 10 key characteristics of a chemical carcinogen), persistence (volatile or non-volatile), and occurrence (amount released relative to the total amount emitted over time). The MMCDA permits the development of a point system to derive a hazard index by considering the following 3 criteria: (1) toxicity of a chemical, (2) persistence of a chemical, and (3) occurrence of a chemical in the geographical area unit under study. The TRI chemicals selected for MMCDA in this study are those described in the "Data sources" section.
Chemical toxicity score
The toxicity criterion consists of two sub-criteria. The first sub-criterion is based on the IARC groupings. A chemical receives a sub-criterion score of 1 point if it is in IARC Group 3, 2 points if it is in IARC Group 2B, 3 points if it is in IARC Group 2A, 4 points if it is in IARC Group 1, and 0 points if it has not been evaluated by IARC. The second sub-criterion is based on the amount of evidence published in the literature regarding a chemical's carcinogenicity. Using PubChem, a publicly available online chemical database (PubChem 2020), we downloaded the title, abstract, and author information for all publications (before April 2019) associated with each selected chemical and tallied the total number of mentions of the following 10 key characteristics (KCs) of chemical carcinogenicity: (1) electrophilic or can be metabolically activated; (2) genotoxic; (3) alters DNA repair or causes genomic instability; (4) induces epigenetic alterations; (5) induces oxidative stress; (6) induces chronic inflammation; (7) is immunosuppressive; (8) modulates receptor-mediated effects; (9) causes immortalization; and (10) alters cell proliferation, cell death, or nutrient supply. In addition, for each chemical, we tallied the total number of times the words "human," "animal," and "tumor" (HATs) appeared across all publications. IARC weighs human subject and tumorigenicity findings heavily in its cancer risk assessment (IARC, 2020). The HATs score is the total number of mentions across all publications and was used to weigh the KCs.
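To make the tallying step concrete, the sketch below counts term mentions across a chemical's downloaded records; the keyword strings standing in for the 10 KCs are assumptions for illustration, since the study's exact term list is not given in the text.

```python
# Assumed keyword proxies for the 10 KCs; the study's actual term list may differ.
KC_TERMS = ["electrophilic", "genotoxic", "DNA repair", "genomic instability",
            "epigenetic", "oxidative stress", "chronic inflammation",
            "immunosuppressive", "receptor-mediated", "immortalization",
            "cell proliferation"]
HAT_TERMS = ["human", "animal", "tumor"]

def tally_mentions(records, terms):
    """Total mentions of the given terms across all titles/abstracts."""
    text = " ".join(records).lower()
    return sum(text.count(term.lower()) for term in terms)

# Toy records standing in for PubChem-linked titles/abstracts of one chemical.
records = ["Styrene oxide is genotoxic and induces oxidative stress in human cells.",
           "Tumor induction in animal models exposed to styrene."]
kc_mentions = tally_mentions(records, KC_TERMS)    # -> 2
hat_mentions = tally_mentions(records, HAT_TERMS)  # -> 3
```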
Increased mention of the KCs and HATs likely indicates greater evidence of carcinogenicity; thus, we assigned points to chemicals according to the distribution of mentions for KCs or HATs across all chemicals considered: 1 point if a chemical falls in the lowest quartile of the distribution, 2 points if it falls between the 25th and 50th percentiles, 3 points if between the 50th and 75th percentiles, and 4 points if in the upper quartile. Points were assigned separately for the KC and HATs quartiles, but the higher of the two was taken as the second sub-criterion score for that chemical. The two sub-criteria scores are then summed to yield the raw toxicity score, whose maximum value is 8. As an example, styrene is classified by IARC as Group 2A, giving 3 points for the first sub-criterion. Styrene is also in the 2nd quartile of the HATs distribution (2 points) and the 3rd quartile of the KCs distribution (3 points); the higher of the two, 3 points, is the second sub-criterion score. The raw toxicity score for styrene is therefore 6 points: 3 points (first sub-criterion) plus 3 points for KCs (second sub-criterion).
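Read as an algorithm, the raw toxicity score is the IARC-group points plus the higher of the two quartile-based points. A short sketch that reproduces the styrene example under these rules (function names are our own):

```python
import numpy as np

IARC_POINTS = {"1": 4, "2A": 3, "2B": 2, "3": 1, None: 0}  # None = not evaluated

def quartile_points(value, distribution):
    """1-4 points depending on the quartile of the distribution value falls in."""
    q25, q50, q75 = np.percentile(distribution, [25, 50, 75])
    return 1 + int(value > q25) + int(value > q50) + int(value > q75)

def raw_toxicity_score(iarc_group, kc_quartile_pts, hat_quartile_pts):
    # First sub-criterion: IARC grouping; second: the higher of KC/HATs points.
    return IARC_POINTS[iarc_group] + max(kc_quartile_pts, hat_quartile_pts)

# Styrene: IARC Group 2A (3 pts); HATs in the 2nd quartile (2 pts),
# KCs in the 3rd quartile (3 pts) -> second sub-criterion is 3 pts; total 6.
assert raw_toxicity_score("2A", kc_quartile_pts=3, hat_quartile_pts=2) == 6
```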
Persistence score
The persistence criterion is based on whether the chemical is a VOC; since VOCs do not persist as long as non-VOCs, a VOC chemical receives a score of 0 points and a non-VOC chemical receives a score of 1 point. The persistence criteria commonly used for EPA's MCDA also consider vapor pressure. For air toxic exposure, we used only a binary measure to estimate persistence.
Rescale raw scores
The raw scores for each chemical's toxicity and persistence were then rescaled using the following formula: S_x,rescaled = (S_x − S_min)/(S_max − S_min), where S_x is the raw score for chemical x, S_max is the highest observed score in the set of chemicals, and S_min is the lowest observed score. S_x,rescaled is the rescaled score for chemical x and ranges between 0 and 1.
Risk score
The final risk score for a chemical is created by summing the rescaled toxicity score (0–1) and the rescaled persistence score (0–1) and ranges from 0 to 2, with a higher score indicating higher risk. These scores serve as a relative ranking and a way of comparing risk across a set of chemicals before incorporating the occurrence of emissions to compute the final hazard index in the steps described below.
Occurrence score
The occurrence score is calculated as the fraction of a chemical released (in pounds) to a geographical unit of interest, such as a ZIP code, out of the total amount released for the same chemical in all ZIP codes combined. If the focus is on the occurrence of chemicals within a different timeframe or a different geographic area, a subset of the TRI database can be selected to calculate the fractions.
Hazard index
Lastly, the final hazard index for each ZIP code was calculated by summing all the chemicals' occurrence fractions for the ZIP code, weighted by the risk score for each chemical. That is, the hazard index for ZIP code i = Σ_j (fraction of chemical j released in ZIP code i relative to chemical j released in all ZIP codes) × (risk score for chemical j). As an example, suppose three chemicals X, Y, and Z were released in ZIP code 08534, with risk scores of 1.8, 0.9, and 0.5, respectively. If 200 pounds of X, 800 pounds of Y, and 600 pounds of Z were released in 08534, and 15,000 pounds of X, 10,000 pounds of Y, and 20,000 pounds of Z were released across all ZIP codes combined, then the hazard index for ZIP code 08534 would be 1.8 × (200/15,000) + 0.9 × (800/10,000) + 0.5 × (600/20,000) = 0.111.
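Putting the last three steps together, the following sketch implements the rescaling formula and the hazard-index sum, and reproduces the worked example for ZIP code 08534; the function and variable names are our own.

```python
def rescale(raw, raw_min, raw_max):
    """Min-max rescale a raw toxicity or persistence score to [0, 1]."""
    return (raw - raw_min) / (raw_max - raw_min)

def hazard_index(zip_lbs, total_lbs, risk_scores):
    """Sum of each chemical's occurrence fraction weighted by its risk score."""
    return sum(risk_scores[c] * zip_lbs[c] / total_lbs[c] for c in zip_lbs)

# Worked example from the text (ZIP code 08534, chemicals X, Y, Z):
risk = {"X": 1.8, "Y": 0.9, "Z": 0.5}                # rescaled toxicity + persistence
zip_lbs = {"X": 200, "Y": 800, "Z": 600}             # pounds released in 08534
total_lbs = {"X": 15_000, "Y": 10_000, "Z": 20_000}  # pounds across all ZIP codes
print(round(hazard_index(zip_lbs, total_lbs, risk), 3))  # prints 0.111
```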
Emissions from Toxic Release Inventory
Annual TRI data from 1987 to 2017 reported the cumulative release of 268,054,248 lbs of air emissions for 110 out of the 201 chemicals that met one or more of the five exposomic features in the study area. These exposomic features were (1) IARC cancer grouping, (2) priority EPA PAHs, (3) component of diesel exhaust, (4) status as a VOC, and (5) evidence of lung carcinogenesis. Of these emissions, 11.5% (30,935,548 lbs) were from 16 unique chemicals classified as IARC Group 1; 2.1% (5,567,528 lbs) were from 8 unique chemicals classified as IARC Group 2A; 12.3% (33,054,547 lbs) were from 33 unique chemicals classified as IARC Group 2B; and 44.7% (119,894,567 lbs) were from 24 unique chemicals classified as IARC Group 3. Four chemicals on the EPA list of 16 priority PAHs accounted for 0.4% (965,911 lbs), including 61,780 lbs reported as unspecified PAHs; and 2.8% (7,406,592 lbs) came from six chemicals listed as components of diesel exhaust. Most of the emissions, 82.3% (220,783,477 lbs), came from 44 unique VOC chemicals. Nine unique chemicals came from the list of chemicals with limited or sufficient evidence of human lung carcinogenesis and accounted for 3.1% (8,309,602 lbs) of the total emissions.
Mapping cumulative exposures by feature
Kernel density maps of the five exposure features are shown in Fig. 1. Of the 16 priority EPA PAHs, only naphthalene, phenanthrene, anthracene, and benzo[g,h,i]perylene were reported. However, the chemical grouping of "polycyclic aromatic compounds" was used to report a significant amount of PAH emissions, creating uncertainty as to which PAHs were released. Benzo[a]pyrene, the only PAH that is a Group 1 carcinogen, was not reported in this TRI dataset, but may have been included in the "polycyclic aromatic compounds" grouping.
NASA satellite imagery
The cumulative exposures from the NASA data are shown in Fig. 2 a and b. Higher concentrations of PM 2.5 were found along the southern New Jersey shore and in the regions corresponding to major highways, as shown in Fig. 2 a. The cumulative NASA PM 2.5 observations were highest in 19032 (Folcroft, PA), 19802 (Wilmington, DE), and 19720 (New Castle, DE). The cumulative NASA NO 2 observations were only available at 10-km resolution, were not as precise as the NASA PM 2.5 data, and made pinpointing exposure at the ZIP code level impossible. The highest NO 2 levels were around Center City, Philadelphia, South Philadelphia, and the New Jersey region across the Delaware River from South Philadelphia, as shown in Fig. 2 b.
Combined mean exposure by TRI features and NASA data
The map of combined mean exposure that incorporates all the features is presented in Fig. 3.
Hazard index derived from MMCDA
Among the 201 TRI chemicals selected, the rescaled toxicity score ranged from 0 (32 chemicals) to 1 (benzene, cadmium, chromium compounds, dioxin, ethylene oxide, formaldehyde, nickel compounds, phosphorus, trichloroethylene, vinyl chloride). The risk score (rescaled toxicity score + rescaled persistence score) ranged from 0 (28 chemicals) to 2 (cadmium, chromium compounds, dioxin, ethylene oxide, nickel compounds, phosphorus). Of the 201 selected chemicals, 55.2% are VOCs. Each chemical's exposomic feature classification, KC and HATs quartile ranking, toxicity score, and risk score are shown in Supplemental Table 1. An important feature of this analysis is that it provides KC and HATs quartile rankings for 119 chemicals which lack an IARC risk assessment as human carcinogens.
The fraction of exposure occurrence for a given compound varied greatly between ZIP codes. Two patterns were observed: some ZIP codes reported many chemicals, each with a 100% fraction of occurrence, whereas several ZIP codes reported only one chemical, but with 100% occurrence. ZIP code 08014 reported the most chemicals with 100% emissions (1,1,2-trichloroethane, 2-nitropropane, 2,4-dinitrotoluene, benzidine, chlordane, chloroethane, heptachlor, hexachloroethane, methoxychlor, nitrobenzene, permethrin, thiram). See Supplemental Table 2 for details of the most reported chemical emissions by ZIP code.
The hazard index for the 421 ZIP codes in the study area ranged from a minimum of 0 (218 ZIP codes, 51.8%) to a maximum of 21.12 (ZIP code 08014). A choropleth diagram of the hazard index mapped for each ZIP code is shown in Fig. 4. The median value among the 186 ZIP codes with a hazard index greater than 0 was 0.04. See Supplemental Table 3 for details on the hazard index for all 421 ZIP codes.
Information about the ZIP codes with the top 10 hazard indices, along with their emission summaries and population sizes, showed large cumulative releases to New Castle, DE (19720), 8,842,667 lbs released to Delaware City, DE (19706), and 9,321,891 lbs released to the PES Oil Refinery region, PA (19145). Of these ten ZIP codes, 19706 is the only one not to border or intersect a major highway. See Supplemental Table 4 for details on total air emissions of the 201 selected chemicals for all 421 ZIP codes. The list of 201 chemicals is in Supplemental Table 1.
Discussion
We investigated hazardous air exposures (exposomics) from anthropogenic sources in ZIP codes of a major US metropolitan area using EPA's Toxic Release Inventory and NASA satellite data. Our results showed varying exposures across ZIP codes. Some of these ZIP codes may not have the highest volume of emissions, but contained a proportionally high occurrence of a more toxic chemical, or their index may have been driven by emissions of a large variety of toxic chemicals.
These ZIP codes tended to be in proximity to major highways, which are important contributors to traffic-related air pollution in metropolitan areas. The predominant major highway in these high-risk ZIP codes is Interstate 95 (I-95), which covers approximately 1917 miles from Florida to Maine. Overall, the number of TRI facilities and their emissions decreased from 1987 to 2017. This is encouraging news, because facilities are either more environmentally conscious or regulations have become more stringent. However, EPA TRI is not a complete picture of all potentially harmful emissions and comes with limitations. For example, the reporting of trade secret chemicals was not required before 2016 and 2017, adding uncertainty. The lack of information about these secret chemicals makes assessing their risk to environmental health difficult. Not all air-emitting industries are required to report chemical emissions to TRI, and not all chemicals are easily detectable. Reporting is conducted by the facility itself and not monitored directly by the EPA. Several significant industries in our study region are known to emit high levels of VOCs, NOx, and SOx but carry permits which allow them not to report to the TRI. The NASA satellite-derived NO 2 and PM 2.5 layers were only available for a shorter timeframe than the TRI information at the time of this study, which limited the cumulative exposure outcome. The incorporation of other exposome data sources for this time period beyond the TRI data, such as EJ screen (EPA 2018), would improve the hazard index.
The focus of this study is only on anthropogenic air pollution and lung cancer. Our analyses showed that Group 1 IARC chemicals made up 11.5% of all TRI air emissions, and VOCs made up most of the reported emissions, comprising 82.3%. Particularly hazardous VOCs (e.g., benzene, formaldehyde, butadiene, and acetaldehyde) were emitted in this study area, while certain troublesome PAHs (benzo[a]pyrene) or diesel exhaust components (nitroarenes) were not. We were surprised to find that the exposures, and therefore the hazard indices, are weighted much more in favor of VOCs than particulates such as PAHs and nitroarenes. Future research could benefit from calculating evaporation rates by using vapor pressure for the volatile compounds. A significant number of PAHs were simply reported to the TRI as "polycyclic aromatic compounds," with their speciation unknown. Although EPA air monitors capture concentrations of PM 2.5, PM 10, and NO 2; hazardous air pollutants (HAPS); volatile organic compounds (VOCs); and NO/NOx/NOy, these data were very sparse for the study region and were excluded from the current analysis. Non-anthropogenic sources such as naturally occurring radon, which can affect the incidence of lung cancer, or difficult-to-capture anthropogenic sources such as traffic and airport emissions, illegal emissions, and household activities can also contribute to pollution but were not captured here.
The hazard index generated by the MMCDA framework provided further insight into this region's exposure to lung carcinogens. An important feature of the MMCDA is the calculation of the chemical toxicity score, which uses KCs and HATs to assess the carcinogenicity of 119 unknowns using citation searches from PubChem. This led to a risk assessment of these chemicals as carcinogens when none was available before. By weighting the frequencies of a chemical's occurrence by its propensity to cause cancer and its environmental persistence, different ZIP codes came to our attention. In particular, the hazard index for ZIP code 08014 (Logan Township, NJ) was nearly threefold higher than that of the second highest scoring ZIP code, 19428 (Conshohocken, PA). The annual age-adjusted lung cancer incidence rates for Gloucester County, which contains ZIP code 08014, consistently rank among the top 2 or 3 highest in New Jersey. From 2013 to 2017, for example, Gloucester 5-year lung cancer incidence rates were 74.6 (70.4, 78.9) per 100,000, compared to 55.3 (54.7, 56.0) for the state. Knowing that the hazard index for ZIP code 08014 is so high indicates a need to further investigate the surrounding area and assess the community's health. Engaging smokers or other high-risk individuals in these elevated-exposure areas to seek preventative care would be beneficial. The MMCDA developed for this study provides a novel tool for assessing the carcinogenicity of a list of chemicals, considering chemical toxicity, persistence, and occurrence. In particular, the proposed toxicity score captures the key characteristics of chemical carcinogens, which has not been done before.
The urban areas found within the study region and the TRI facilities residing within them are not unique compared to other US urban regions. Tobacco smoking and human proximity to lung cancer-causing emissions are an unfortunate human condition found across the nation and globe. If the association between toxic environmental exposures and lung cancer holds true, then the prescription of a hazard index, or analyses similar to what we performed, may improve the efficacy of LCS. This approach could be used to identify high-risk areas where the effectiveness of screening could be assessed. By identifying smokers and never smokers who have lived in high-risk areas of exposure for extended periods of time, we can sub-stratify at-risk populations for participation in LCS trials to determine if there is an increase in lung cancer detection. This MMCDA tool and its hazard indices could be used in intervention trials to persuade smokers to participate in smoking cessation programs because of their higher risk. The hazard indices could also be used in lung cancer incidence surveillance programs to inform public health officials and decision makers to implement exposure reduction programs.
This study only examined toxic air exposures within the 12 counties of a metropolitan area. Air-polluting sources located near the study region, but not captured in this study, could be a significant subject for future study. Meanwhile, the cumulative exposures created from publicly available EPA and NASA satellite data sources could be expanded to incorporate more years, additional layers (from EJ screen), or larger geographic areas of study. The methodology of this work could be used to determine the risk of chemical exposures associated with other types of cancer, to identify populations at risk.
Abbreviations ALA, American Lung Association; HATs, "human," "animal," "tumor"; ArcGIS, geospatial mapping software; CDC, Centers for Disease Control and Prevention; DPM, diesel particulate matter; KCs, key characteristics; LCS, lung cancer screening; MCDA, multiple-criteria decision analysis; MMCDA, multi-step multi-criteria decision analysis; NASA, National Aeronautics and Space Administration; NIH PubChem, a National Institutes of Health database of chemical molecules and their activities; NO 2, nitrogen dioxide; PAHs, polycyclic aromatic hydrocarbons; PES Oil Refinery, Philadelphia Energy Solutions Oil Refinery; PM 2.5, particulate matter with a diameter of less than 2.5 μm; US-EPA, United States Environmental Protection Agency; TRI, Toxic Release Inventory; VOCs, volatile organic compounds; WHO IARC, World Health Organization International Agency for Research on Cancer; ZIP codes, Zone Improvement Plan codes (postal codes used by the United States Postal Service). | 2021-02-21T14:19:34.774Z | 2021-02-21T00:00:00.000 | {
"year": 2021,
"sha1": "cc9ebefe17ec945bc8965f242a1b919d914304c5",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11356-021-12884-z.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "cc9ebefe17ec945bc8965f242a1b919d914304c5",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
256724654 | pes2o/s2orc | v3-fos-license | Molecular Mechanism Operating in Animal Models of Neurogenic Detrusor Overactivity: A Systematic Review Focusing on Bladder Dysfunction of Neurogenic Origin
Neurogenic detrusor overactivity (NDO) is a severe lower urinary tract disorder, characterized by urinary urgency, retention, and incontinence, resulting from a neurologic lesion that damages the neuronal pathways controlling micturition. The purpose of this review is to provide a comprehensive framework of the currently used animal models for the investigation of this disorder, focusing on the molecular mechanisms of NDO. An electronic search was performed with PubMed and Scopus for literature describing animal models of NDO used in the last 10 years. The search retrieved 648 articles, of which reviews and non-original articles were excluded. After careful selection, 51 studies were included for analysis. Spinal cord injury (SCI) was the most frequently used model to study NDO, followed by animal models of neurodegenerative disorders, meningomyelocele, and stroke. Rats were the most commonly used animal, particularly females. Most studies evaluated bladder function through urodynamic methods, with awake cystometry being particularly preferred. Several molecular mechanisms have been identified, including changes in inflammatory processes, regulation of cell survival, and neuronal receptors. In the NDO bladder, inflammatory markers, apoptosis-related factors, and ischemia- and fibrosis-related molecules were found to be upregulated. Purinergic, cholinergic, and adrenergic receptors were downregulated, as were most neuronal markers. In neuronal tissue, neurotrophic factors, apoptosis-related factors, and ischemia-associated molecules are increased, as are markers of microglia and astrocytes at lesion sites. Animal models of NDO have been crucial for understanding the pathophysiology of lower urinary tract (LUT) dysfunction. Despite the heterogeneity of animal models for NDO onset, most studies rely on traumatic SCI models rather than other NDO-driven pathologies, which may result in some issues when translating pre-clinical observations to clinical settings other than SCI.
Introduction
The lower urinary tract (LUT), comprising the urinary bladder and urethra, is responsible for the storage and periodic elimination of urine. Micturition relies on the synchronized activity of the bladder and the urethral sphincter, the functional muscular unit that controls urine flux [1,2]. Thus, the expulsive contractions of the detrusor muscle are tightly coordinated with relaxation of the urethral sphincter, to ensure efficient urine removal. Normal LUT function relies on complex networks involving neurons operating in an on-off, switch-like manner and located in the peripheral ganglia, spinal cord, and supraspinal centers [1,3]. These neuronal circuits are established and matured during infancy, when voluntary control over micturition is learned. Activation of these circuits allows conscious and voluntary switching from storage to voiding, influenced by the perceived state of bladder fullness and an assessment of social appropriateness [1]. Therefore, the complexity of neuronal LUT control is such that it comes as no surprise that voluntary control over micturition is easily jeopardized in neurologic conditions affecting the central nervous system (CNS), including spinal cord injury (SCI), stroke, and progressive neurodegenerative disorders, such as Parkinson's disease (PD) or multiple sclerosis (MS) [4][5][6].
The most common urinary dysfunction arising from central neurologic disease is neurogenic detrusor overactivity (NDO), defined by the International Continence Society as "Involuntary detrusor muscle contractions that occur near or at the maximum cystometric capacity, in the setting of a clinically relevant neurologic disease. These contractions generally cannot be suppressed, resulting in urinary incontinence or even reflex bladder emptying (reflex voiding)." [7]. The region where the lesions occur is pivotal for their clinical manifestations. When NDO arises from damage in suprapontine areas (e.g., stroke, PD), symptoms reflect the blockade of tonic inhibition of the pontine micturition center, resulting from damage to supraspinal micturition pathways. Such damage is mostly associated with storage symptoms, particularly manifested as detrusor overactivity, as a result of bladder outlet obstruction and urethral sphincter dysfunction [8,9].
If the injury occurs in the suprasacral spinal cord (e.g., SCI), this triggers the emergence of alternative micturition pathways, located entirely in the lumbosacral spinal cord, operating in the absence of supraspinal input, and dependent on afferent C fibers [10][11][12]. Unlike what happens after suprapontine lesions, in this case NDO is often concurrent with detrusor sphincter dyssynergia (DSD), resulting in impaired bladder emptying, high residual volumes of urine, and frequent episodes of urinary incontinence [5]. There are also cases, in diseases such as MS, in which the damage may originate from both suprapontine and suprasacral lesions. NDO is also likely to occur due to neurogenic inflammation of the bladder, when a physical interruption of brain-bladder circuits is not evident [13,14].
Pharmacological NDO treatment aims to reduce detrusor contractions and promote continence. It is initiated with anticholinergic drugs, with intradetrusor injections of botulinum toxin A remaining the gold-standard option for refractory patients [15]. Pharmacological interventions are combined with intermittent catheterization, performed by the patient or the caregiver [16]. The course of NDO includes an increased frequency of urinary tract infections, and the risk of kidney deterioration is high [17]. Therefore, the health and quality of life of NDO patients are severely compromised. Moreover, available therapies often carry significant side effects, and some may lose efficiency over time. Therefore, a breakthrough in the treatment of NDO is urgently needed. To produce a significant advance in NDO therapies, it is necessary to gain a better understanding of NDO pathophysiology and grasp the molecular changes that underlie this pathology, including changes in the expression of many receptors, trophic factors, and inflammatory mediators. Animal models have been critical for this, as they offer the possibility to investigate the in vivo consequences of NDO, including shifts in urodynamic parameters, and to identify changes in gene expression and in the electrophysiological properties of neurons involved in the control of LUT function [6,11]. Furthermore, by mimicking human pathology, the back-translational value of animal models becomes more evident, as they allow the direct testing of new drugs and therapies before advancing research to clinical trials.
There are many studies using animal models of NDO, but published results are diverse and difficult to interpret in an integrative manner. However, to propose truly innovative hypotheses, it is necessary to congregate data and critically review published results, appreciating the wealth of knowledge generated. Therefore, the present systematic review aims to collect and discuss pre-clinical literature focusing on molecular changes associated with NDO of central origin, providing an up-to-date analysis of molecular studies involving animal models of NDO published in the last decade. This review seeks to present a holistic view of the current findings in animal NDO models, which is essential to shape future research directions in the field and propel the clinical translation of these findings.
Results
The fine molecular mechanisms involved in NDO emergence and maintenance are currently better understood [18], but remain challenging for clinicians and researchers. A fully effective treatment, able to reverse urinary dysfunction, remains to be identified, and there is ample need to improve symptomatic care for NDO patients and, as a consequence, their quality of life. Current treatment aims primarily to protect the upper urinary tract and, later on, to promote continence. However, treatments are not fully effective, carry bothersome secondary effects, such as cognitive impairment (due to prolonged use of anti-muscarinic drugs) and increased frequency of urinary infections, and are not able to reverse the loss of control over bladder function [19,20]. Therefore, animal models of NDO are critical to clarify NDO pathophysiology and pinpoint putative therapeutic targets.
The database search used the keywords ((Animal model) OR (rat) OR (mice) OR (rabbit) OR (pig)) AND ((Neurogenic detrusor overactivity) OR (neurogenic bladder)) and identified 648 candidate studies. After the removal of duplicate records, 365 studies were screened by title and abstract. Two hundred and thirteen records were excluded: 129 were unrelated studies, 13 studies were published in languages other than English, and 20 records focused on human patients as their study population. Fifty-one publications were also excluded because they were not journal articles, and included reviews (n = 42), conference papers (n = 2), and comments or editorials (n = 7). Of the remaining 152 studies, it was not possible to retrieve 5 publications. The full texts of 147 articles were screened according to the inclusion and exclusion criteria (see below), which resulted in 51 eligible studies for this systematic review. From the 147 screened articles, 2 reports were excluded, as they used aging as an NDO model but failed to indicate neuronal causes for this urinary dysfunction. Finally, 6 publications were further dismissed, as they used spinal cord injury (SCI) as an NDO model but only focused on early-stage SCI, i.e., 14 days after spinal injury, when animals are still in spinal shock [21][22][23] and there are few or no signs of bladder reflex activity [24][25][26][27]. They also failed to produce urodynamic data indicative of NDO within that time frame. The study selection process is depicted in Figure 1, and a summary of the included studies is presented in Table 1.
Table 1. Summary of included studies in this systematic review. The data were extracted and sorted into the following categories: model, species, sex, induction method, urodynamic findings, changes in bladder tissue, changes in neuronal tissue, and therapies/mechanisms identified. NDO: neurogenic detrusor overactivity; SCI: spinal cord injury; MAG: myelin-associated glycoprotein; OMpg: outer membrane protein G; RGMa: repulsive guidance molecule A; NGF: nerve growth factor; DRG: dorsal root ganglion; Trka: tropomyosin receptor kinase A; AKT: protein kinase B; TRPM4: transient receptor potential cation channel subfamily member 4; NF200: neurofilament 200; S100: S-100 protein marker; TRPV1: transient receptor potential cation channel subfamily V member 1; M3: muscarinic receptor 3; TGFB1: transforming growth factor beta 1; BoNT-A: botulinum toxin-A; DSD: detrusor sphincter dyssynergia; BDNF: brain-derived neurotrophic factor; ASIC: acid-sensing ion channel; RTX: resiniferatoxin; GAP43: growth-associated protein 43; CGRP: calcitonin gene-related peptide; IPHFO: duration of intra-luminal pressure at high-frequency oscillations; GFAP: glial fibrillary acidic protein; PGE2: prostaglandin E2; HIF: hypoxia-inducible factor; TGF: transforming growth factor; bFGF: basic fibroblast growth factor; NVC: non-voiding contraction; CRF: corticotropin-releasing factor; EUS: external urethral sphincter; PGP 9.5: protein gene product 9.5.
Increase in BDNF at all injury timepoints: higher at 2 weeks, decreasing at 4 and 6 weeks but never returning to basal levels.
The development of DSD might be related to changes in the expression of mechanosensitive channels such as ASICs and Piezo2; changes in these channels are accompanied by changes in BDNF expression. One week after SCI, all groups presented bladder areflexia; in the severe contusion group, urinary function did not improve.
Mild contusion rats presented better scores following the third week after lesion.
Bladder function was significantly worse following severe compared to moderate compression injury. Awake cystometry: inter-contraction interval, bladder capacity, and bladder compliance were significantly increased in SCI animals treated with combination therapy, and not monotherapies; the time required for the first NVC was significantly prolonged in the oxybutynin and combination group.
Type 3 collagen, HIF-1α, TGF-β1, and FGF-β (factors involved in tissue remodeling and hypoxia) were reduced by oxybutynin and combination therapy; with mirabegron therapy, the mRNA expression of HIF-1α and TGF-β1 was significantly reduced compared to controls.
The combination therapy of an anticholinergic agent (oxybutynin) and a β3-adrenoceptor agonist (mirabegron) elevated the bladder elastin level, reduced NVCs, and increased bladder compliance to a greater extent than monotherapy with either drug in SCI. Awake cystometry: bladder was areflexic the first week after injury; over the following 3 weeks, maximum detrusor pressure constantly increased, exceeding the baseline measurements at 4 weeks; reduction in voiding rates and urine volumes; reduction in bladder compliance; development of DSD. Treatment with anti-Nogo-A improved several urodynamic parameters.
After 4 weeks, animals treated with vehicles showed decrease in CRF-positive innervation of Lam X; animals treated with anti-Nogo-A antibody presented values similar to intact rats; in the IML region, both injury groups showed a reduced CRF-positive fiber density; anti-Nogo-A antibody-treated rats showed a trend for higher GABAergic values, GAD2 mRNA-positive cells decreased in L6-S1.
Anti-Nogo-A antibody treatment improved urodynamic and electrophysiological parameters in SCI animals, namely a pronounced recovery of the physiological EUS function during voiding. This is likely due to protection of spared descending fibers from the PMC sprouted below the level of the injury in a specific target region, Lam X, thereby restoring functional input from the key bladder control system. Lymphoid tissue hyperplasia; nerve markers (NF200 and S100) positive at muscular sites.
In injured spinal segments, S100 was increased and NF200 was diminished. Awake cystometry: increased number of non-voiding contractions, showing signs of detrusor overactivity; the treatment significantly attenuated bladder dysfunction, but not to basal levels. PGP 9.5 (general nerve marker) was increased in trained rats and decreased in non-trained rats (reduced detrusor hypertrophy); NF200 afferent fiber innervation was reduced in non-trained animals; the NF200:PGP ratio was significantly lower in trained rats; non-trained rats showed a trend for low TH density.
A multisystem neuroprosthetic training program counteracts the emergence of neurogenic bladder dysfunction and improves bladder function in rats with severe SCI. Cystometry under urethane anesthesia: increased frequency and basal pressure; decreased amplitude of contractions; treatment with botox normalized these parameters to basal conditions.
Onabot/A cleaves SNAP-25 in L5-S1 spinal segments, coursing laminae I and II of the dorsal horns. Increase in CGRP expression at L5-S1 spinal cord (laminae III and IV) and at DRG level; treatment reduced this. Increase in ATF3 (marker of neuronal stress) expression; treatment further increased this.
Botulinum toxin A improves SCI-induced NDO, acting predominantly on bladder sensory fibres. The mechanism of action of Onabot/A includes the cleavage of SNAP-25 in sensory terminals but also impairment of basic cellular machinery in the cell body of sensory neurons. Awake cystometry: bladder capacity, post-void residual urine, and the number of non-voiding contractions during storage were larger when the bladder of SCI animals was only squeezed once daily, compared with twice and thrice.
-At 4 weeks after SCI, the bladder weight was reduced in animals who had their bladders more frequently squeezed; -Levels of NGF protein in the bladder mucosa of SCI mice were higher; -Levels of NGF were lower in animals who had their bladders more frequently squeezed.
The expression of P2X2, P2X3, TRPA1, and TRPV1 mRNA was increased in SCI mice (DRG), when compared to spinal intact mice.
The post-injury bladder management with an increased number of daily bladder emptying improves the storage and voiding LUTD after SCI, associated with the decrease in bladder NGF and reductions in C-fiber afferent marker receptors in bladder afferent pathways. Reduction in the density of 5-HT-positive fibers in both lamina X and ventral horn. 5-HT density increased over time, but remains severely affected up to 4 weeks after SCI; -Decrease in CRF-positive fiber density in the intermedio-lateral column (and lamina X), but partially at 4 weeks; -Increase in CGRP density only 2-3 weeks after SCI; -Decrease in the glutamatergic neurons (VGLUT2 mRNA-positive) in the laminae I, II and III of the dorsal horn, but not in laminae IV-V and X; -Decrease in GABAergic cells (GAD2 mRNA-positive) in the laminae I, II, III, IV and V.
Detrusor overactivity is possibly influenced by the sprouting of type C afferent fibers in the dorsal horn responding to bladder distension, while DSD might be driven by decreased bulbospinal input to, and a reduced number of, inhibitory GABAergic interneurons in the lumbosacral cord. Cystometry under urethane anesthesia: SCI increased the duration of intraluminal pressure high-frequency oscillations and the frequency of non-voiding contractions; these parameters were improved by P2X7R antagonist treatment.
-Increased expression of beta-actin marker; -Increased levels of urothelial P2X3 receptors; treatment with P2X7R antagonist attenuated both findings.
-Activation and infiltration of microglia in T7/T8 dorsal horn areas in non-BBG treated SCI groups.
-The density of CD11b-positive microglia cells and the percentage of activated microglia were significantly reduced in treated rats.
P2X7R antagonist (BBG) induced a significant reduction in the frequency of non-voiding detrusor contractions, which was correlated with a lower amount of activated microglia. Awake cystometry: transcutaneous tibial nerve stimulation induced fewer episodes of non-voiding contractions, a lower maximum intravesical pressure during the storage phase, a higher voided volume, and a lower post-void residual volume in SCI rats, resulting in a higher voiding efficiency; the beneficial effect in bladder urodynamics disappeared one week after the end of the stimulation period.
The unstimulated sham animals had a larger and heavier bladder compared with animals that underwent tibial nerve stimulation.
Higher density of CGRP-positive structures in layer I and II of the dorsal horn of L6 and S1 in the stimulated group (not statistically significant).
Application of transcutaneous tibial nerve stimulation in rats early after SCI had a beneficial influence on the development of lower urinary tract dysfunction that typically arises after an incomplete SCI. Cystometry under Zoletil anesthesia: increase in contraction pressure and contraction time; transplantation of oral mucosa stem cells into the injury area ameliorated these features. The transplantation of oral mucosa stem cells decreased the SCI lesion, since new tissue was increased in the surroundings of the damaged tissue, reduced apoptosis, and increased spinal cord expression of α-SMA and Ki67; c-Fos and NGF expression in the neuronal voiding centers of SCI animals were also reduced by the treatment.
Transplantation of oral mucosa stem cells ameliorated the SCI-induced neurogenic bladder symptoms by inhibiting apoptosis and enhancing cell proliferation. As a result, SCI-induced neuronal activation in the neuronal voiding centers was suppressed, reflecting the normalization of voiding function. Awake cystometry: 2 weeks after SCI, basal pressure, leak-point pressure, and residual urine volume increased; the detrusor was hyperactive during bladder filling, DSD occurred during voiding, and bladder compliance was decreased. Four weeks of accumulated sacral anterior root stimulation with anodal block: intravesical pressure, maximum bladder pressure, maximum detrusor pressure, bladder leak-point pressure, resting pressure, and residual volume decreased, while bladder capacity and voiding volume increased.
-Bladder expression of the M2 receptor, P2X3 receptor, and NGF increased in SCI animals; decreased after 4-week electrical stimulation; -Expression of the M3 receptor and β2-adrenergic receptor decreased following SCI, increasing after 4-week electrical stimulation.
Long-term sacral anterior root stimulation with anodal block in rabbits following SCI could restore urinary function. The recovery of neurotransmitter receptor expression and the decrease in NGF expression could be among the mechanisms of action. Cystometry under urethane anesthesia: typical voiding contractions of the bladder were not observed in SCI rats; they were replaced by several irregular micturition waves of low amplitude.
-Detrusor hypertrophy; -Increase in mesenchymal tissue; -Increase in bladder volume; -The mRNA and protein expression levels of the four HCN subtypes were decreased, with HCN1 showing the most significant reduction; all four HCN subtypes were expressed in single bladder interstitial cells of Cajal-like cells (ICC-LCs); -The protein levels of Trip8b, Nedd4-2, and NRSF were upregulated, while filamin A was downregulated.
Decreased bladder HCN channel expression and function induced by altered regulatory proteins are involved in the pathological process of SCI-induced neurogenic bladder. Awake cystometry: reduction in inter-contraction interval, voided volume, and voiding efficiency; increased basal pressure, threshold pressure, and bladder capacity. Bladder function was improved by treatment with tanshinone IIA and methylprednisolone.
-Increased bladder weight; -Increase in thickness of bladder detrusor; -Vascular alterations, edema, and proliferation of urothelial layers; the umbrella cell layer was disrupted and a marked neutrophil infiltration to the suburothelial tissue as well as blood vessel congestion and dilation was observed; treatment with tanshinone IIA and methylprednisolone reduced these features.
-Decrease in motor neurons in the anterior horn, paired with reduced Nissl body conspicuity; -DRGs L6-S1 presented a large number of inflammatory cells; -DRG L6-S1 neuron cell bodies became hypertrophic and elongated, with some nuclei shrunken or absent; some Nissl bodies also disappeared or were replaced by vacuoles. All these features were attenuated by tanshinone IIA and methylprednisolone treatments. -Increase in collagen and reduction in smooth muscle fibers; disorganization of the distribution of these fibers; -The ratio of type I/III collagen in bladder smooth muscle cells was higher than in controls. Treatment with 3-methyladenine improved the overall histological changes.
-Enlargement of the space around nerve cells in the spinal cord; appearance of blurred nucleoli, swollen cells, and vacuoles. After treatment, the number of necrotic nerve cells and vacuoles in the spinal cord tissue was reduced, as was the degree of inflammatory infiltration; -Increased LC3-II expression levels; treatment reduced them; -Reduced MBP expression; treatment increased it.
3-methyladenine reduces the loss of MBP and inhibits bladder detrusor dysfunction by inhibiting the autophagy response in bladder detrusor muscle cells. The inhibition of collagen fiber expression in the detrusor promotes the recovery of bladder function. Voiding spot assay: at 10 weeks post-inoculation, bladder capacity, the inter-micturition interval, and bladder pressure at voiding in all groups, except for the C-RELAP group, were similar to the respective values in the control group. Mice in the C-RELAP group developed an overactive bladder phenotype. This means that the C-RELAP group developed a more severe and longer-lasting type of neurogenic bladder overactivity than the other groups, providing evidence of some correlation between the type of neurodegenerative changes in the CNS and the type of voiding dysfunction developed in CIE mice.
-Increased expression of TNF-α; -Increased content of IFN-γ, IL-2, TGF-β, and TNF-α; -Decreased expression of IL-1β and IL-10 in the brain. The C-PRO group was characterized by decreased expression of IL-1β, IL-6, IL-10, IL-17, and TNF-α. C-RELAP mice had a significantly reduced level of IL-4 in the brain. -c-Fos expression levels in the neuronal voiding centers (medial preoptic area, ventrolateral gray, pontine micturition center, and SC L4-L5) were increased; -NGF expression levels in the neuronal voiding centers were increased.
ICH-induced NLUTD rat model may be a more appropriate method to analyze NLUTD in stroke patients than a cerebral infarction model. Smooth muscle of the bladder in fetal rats with myelomeningocele is morphologically normal, while the innervation of the smooth muscle of the bladder is markedly decreased centrally and peripherally. Astrocytosis appears in a later embryonic stage, which could be related to nerve repair in the spinal cord.
Tekin et al., 2016 [75]; model: myelomeningocele; species: rat (fetuses from pregnant female rats); induction: gavage feeding of retinoic acid at embryonic day 10 (E10). -The interstitial cells of Cajal (ICC) score of the MMC group was decreased.
The density of ICC in the urinary bladder is decreased in the neurogenic bladder that develops in MMC. -Inhibition of bladder cell proliferation, due to increased apoptosis in the late embryonic stage (increased cleaved caspase 3); -Increase in α-SMA mRNA; -NeuN protein expression increased with time, with no significant difference between the MMC and CRL groups from E16 to E18; however, the expression of NeuN protein was significantly lower in the MMC group than in the CRL group from E20 to E22.
Bladder dysfunction in myelomeningocele fetal rats is related to the inhibition of proliferation, promotion of apoptosis, and reduction in bladder nerve and smooth muscle-related protein synthesis.
Induction Model and Assessment of Bladder Function
LUT dysfunction is a common consequence of several neurologic diseases. The level at which the neurologic lesion occurs may provoke distinct urinary complications (Figure 2A), such as NDO [8]. The vast majority of the analyzed reports, 74%, use spinal cord injury as an NDO model (Figure 2B). From these publications, 59% report complete transection of thoracic segments to induce SCI [26][27][28][29][30][31][32][33][34][35]38,39,43,44,46,[48][49][50]55,57,62,63], followed by spinal contusion (13%). The use of spinal hemisections was less frequent (10%) [42,47,52], as were spinal compressions (8%) [32,36,54,56] and other SCI methods (10%) [41,51,53,58] (Figure 3). Regarding the SCI model of NDO, we found that, irrespective of the type of injury, the thoracic level was the preferred level to inflict the spinal lesion, with the T8-T9 levels being particularly favored. Only one study used a lesion at a higher level (T4) [48] (Figure 3). In Figure 3, "Other" includes one publication which did not specify the spinal lesion level and another study that reported a complex spinal cord injury (involving non-contiguous spinal segments). Since many studies present their spinal lesion level as a combined lesion of two contiguous segments (e.g., T8-T9), we only considered the upper segment in order to graphically represent these data. This graph only represents data reported from rodents (rats and mice) due to their similar vertebral formula. Only two studies used non-rodent animals to induce SCI, and both relied on lagomorphs (rabbits), which have a distinct vertebral formula.
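To make the plotting convention explicit: reducing a combined lesion label to its upper segment is a simple string operation. The following Python sketch is purely illustrative (the function name and example labels are hypothetical, not taken from the included studies); it merely encodes the upper-segment rule described above.

import re

def upper_segment(level: str) -> str | None:
    # Reduce a reported lesion level such as 'T8-T9' to its upper
    # segment ('T8'), the convention used to build Figure 3.
    # Labels that do not start with a recognizable segment (e.g.,
    # unspecified or complex lesions) return None ('Other' category).
    match = re.match(r"([CTLS]\d+)", level.strip().upper())
    return match.group(1) if match else None

# Hypothetical labels of the kind reported by the included studies.
for label in ["T8-T9", "T10", "T4", "unspecified"]:
    print(label, "->", upper_segment(label))  # T8, T10, T4, None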
The second most-used animal models to produce NDO were related to neurodegenerative disorders. MS was reported in 10% of the articles reviewed and was induced by promoting experimental autoimmune encephalomyelitis (EAE) [66,68,69] and coronavirus-induced encephalitis (CIE) [65,67]. PD was reproduced in 6% of the studies, using pharmacological induction [71,73] or genetic models [64]. Less frequently used were animal models of meningomyelocele (6%), induced with retinoic acid, and of stroke (4%), reproduced either by middle cerebral artery occlusion (MCAO) to resemble cerebral ischemia [70] or by cerebral hemorrhage in the hippocampus to reproduce hemorrhagic stroke [72].
The vast majority of selected studies (71%) performed urodynamic evaluation of the animals to confirm the presence of NDO. Signs of LUT dysfunction were evident, with animals presenting NDO characteristic features: increased micturition frequency; increased number of non-voiding contractions, basal pressure, maximum voiding pressure, and threshold pressure; and high residual volumes. Moreover, decreased voided volume, maximum flow rate, and voiding efficiency were also reported. In traumatic models (such as SCI), the onset of the NDO phenotype was preceded by a period of spinal shock, with little or no bladder activity, that lasted up to 14 days post-injury. In the remaining pathologies (MS, PD, stroke), NDO symptoms were present immediately after model induction.
Animal Species and Sex
Concerning the animal species used in NDO models, rodents were preferred, with 71% of studies using rats. Other animals were less frequently used and included mice (23%), rabbits (4%), and non-human primates (marmosets) (2%) ( Figure 1C). The majority of studies used female animals (76%). Males were used in 16% of selected studies, while only 3% used both male and female animals. Curiously, 5% of studies did not specify which sex was used ( Figure 1D).
Changes in Bladder Tissue
Many studies presented significant findings regarding gross tissue and cellular morphology of the bladder. Animals with an NDO phenotype presented larger bladder volumes and weights than control animals. Bladder tissue was also more fibrotic in NDO animals, associated with detrusor hypertrophy, features which led to a bladder wall thickness increase. The urothelial layer was usually damaged and disorganized, and inflammatory features were conspicuous in the NDO phenotype-leucocyte infiltration in the lamina propria, lymphoid tissue hypertrophy, and vascular congestion and rupture were described. Regarding the ultrastructure of the detrusor smooth muscle cells, some ultrastructural changes were described following NDO induction, such as mitochondrial swelling and rough endoplasmic reticulum hypertrophy.
Several molecular factors were found to play a crucial role in the genesis of NDO, and their expression depends on the NDO model and on the histological layer of the bladder wall being analyzed. The most relevant molecular factors explored in the included studies are listed in Table 2 (changes in the bladder) and Table 3 (changes in neuronal tissues), along with the treatments that reportedly reverted or attenuated the NDO-related expression change. In summary, neurotrophic factors are overexpressed in the bladder of chronic SCI and PD animals and underexpressed in the bladder of MS and stroke animals. Inflammatory markers, apoptosis-related factors, and ischemia- and fibrosis-related molecules are upregulated in the bladder tissue of animals with NDO, irrespective of the NDO animal model. Purinergic, cholinergic, and adrenergic receptors, as well as neuronal markers, are generally downregulated, although there are some contradictory results.
Changes in Neuronal Tissue
Several reports indicate the occurrence of multiple changes in the neuronal tissue of NDO animals, largely depending on the NDO model. As SCI was the most commonly used NDO model, the following refers to SCI, unless otherwise indicated. Following SCI, microglial and astrocyte activation was evident, along with the establishment of a pro-fibrotic scar tissue at the spinal lesion site. Gray and white matter disorganization was also reported, with the number of neuronal cells and their Nissl bodies being reduced. Various inflammatory cells infiltrated the lesion level. In studies using a brain injection of an active substance to induce NDO, the lesion site was reported to show gliosis and inflammatory infiltration, similar to what had been observed in SCI animals.
Table 2. Expression of molecular markers in the bladder wall after NDO induction according to NDO model and tissue layer. Every molecule with a statistically significant expression variation (p < 0.05) in the included papers is present in this table. Molecules are split into categories and ordered alphabetically. BDNF: brain-derived neurotrophic factor; GDNF: glial cell line-derived neurotrophic factor; NGF: nerve growth factor; IFN-γ: interferon-gamma; IL: interleukin; TNF-α: tumor necrosis factor alpha; GAPDH: glyceraldehyde 3-phosphate dehydrogenase; EP: prostaglandin E2 receptor; pTrkA: phosphorylated tropomyosin receptor kinase A; HCN channels: hyperpolarization-activated cyclic nucleotide-gated channels.
Many molecular factors have their expression up- or downregulated after NDO induction, depending on the NDO model and on the studied neuronal structure. The most relevant molecular factors explored in the included studies are listed in Table 3, accompanied by treatments that reportedly reverted or attenuated the NDO-related expression change.
Briefly, neurotrophic factors, apoptosis-related factors, and ischemia- and fibrosis-associated molecules were upregulated in the neuronal tissues of SCI animals. Inflammatory markers exhibited a tendency to increase shortly after MS induction, followed by a significant decrease below basal levels several weeks later. The expression of purinergic receptors and transient receptor potential channels showed particularly contradictory results, not explained by the NDO model, the location within the neuronal system, or the molecular analysis technique used. Axonal growth regulators, such as MAG, Nogo-A, and RGMa, were upregulated in the lumbosacral spinal cord of the animals that suffered SCI. Expression of GFAP (a gliosis-associated protein, also used as a marker for astrocytes) was evaluated in SCI, MS, and MMC models, and reports indicate it was generally upregulated, particularly near the lesion site.
Discussion
Micturition relies on intact communication between supraspinal centers, the spinal cord, and peripheral neurons [1]. Connections between the pons, where the pontine micturition center (PMC) is located, and the sacral spinal cord are required for efficient voluntary control over LUT function [4]. Neurologic diseases, including SCI, neurodegenerative disorders (MS or PD), meningomyelocele, and cerebrovascular accidents, may jeopardize urinary function by causing damage to these neuronal circuits [11]. Several studies using animal models of disease have addressed and discussed changes occurring in the bladder and/or the neuronal pathways governing LUT function, contributing to a better understanding of NDO pathophysiology and showing the potential to pinpoint possible future therapeutic targets. The present review systematically analyzed several of these studies and summarized the main findings.
NDO-Driven Pathology and Induction Model
Any neurological disorder that affects the micturition areas of the central or peripheral nervous system is a possible cause for NDO. We focused on NDO resulting from injury to the CNS as this was the most common situation. Our analysis shows that the vast majority of the animal models used to study NDO are based on SCI models (74%). Models reproducing neurodegenerative disorders, including Parkinson's disease (PD) in 6% of the studies and multiple sclerosis (MS) in 10% of the articles, were less frequently reported. Animal models of meningomyelocele (6%) and stroke (4%) were reported in less than 10% of the studies scrutinized here.
Spinal Cord Injury (SCI)
High-level SCI is followed by a period of little or no bladder reflex activity [22,24], in which the neuronal communication between LUT organs and supraspinal centers is abolished [22]. Spinal shock is gradually replaced by NDO, as a result of the neuroplastic rearrangement of micturition reflexes at the lumbosacral spinal cord [77]. These rearrangements are dependent on C-fibers [11,25,78], which undergo axonal sprouting in the bladder and lumbosacral cord [26,27,79] and lower their threshold [80], resulting in NDO [11,81]. Neuroplastic changes also likely contribute to DSD, which is frequently associated with NDO and leads to increased intravesical pressures and high volumes of residual urine, associated with a high risk of urinary infections and kidney deterioration [82]. SCI was the most-replicated pathology, possibly due to the high reproducibility and homogeneity of experimental procedures and functional outcomes.
The spinal regions most frequently affected in human SCIs are cervical or high thoracic segments, due to abrupt flexion and/or rotation of the head or neck [83]. However, the analyzed data indicate that most studies concerning urodynamic problems after SCI relied on low thoracic lesions. As any lesion occurring at the cervical region can result in respiratory compromise and is associated with a high mortality rate due to interruption of the bulbospinal respiratory drive [84,85], lesions of high thoracic or cervical segments are avoided. Instead, most studies refer to injuries between T8 and T10, which also cause NDO without affecting breathing.
Human SCIs mostly occur due to blunt trauma (i.e., motor vehicle crashes or sport injuries), where the spinal cord is damaged by an object or displaced bone and/or tissue. Thus, in SCI studies where the goal is to investigate post-traumatic lesion-associated processes or repair mechanisms, or to test neuroprotective treatments, the preferred method to reproduce SCI is often spinal cord contusion [85]. However, this is not the case when it comes to urological investigations. The majority of the retrieved articles in our systematic study used complete transection models [26][27][28][29][30][31][32][33][34][35]38,39,43,44,46,[48][49][50]55,57,62,63] (58%), which may be explained by their ease of reproduction and the lower associated costs, as they do not require specific equipment. One could speculate that, when it comes to studying SCI-induced urinary dysfunction, the chosen method to reproduce SCI is not as important as it is in regenerative or tissue engineering studies. Nevertheless, recent studies have shown that the consequences for urinary function associated with transection and contusion models are, in fact, different [37,86,87]. Although more clinically relevant, contusion models were used in only 20% of the articles in our search [37,45,[59][60][61]. In these cases of mild contusion, most resorted to automatic spinal cord impactors [37,[59][60][61]. These devices reduce variability between experiments by producing a force-controlled impact, in which the amount of time that the impact tip remains on the tissue is controlled to the millisecond. Additionally, an attached force sensor precisely measures the force of the impact, which minimizes error introduced by specimen movement and makes it possible to immediately detect any problem with the impact [85]. However, these systems are expensive and associated with high maintenance costs. The classical weight-drop method, used in only one study [45], is more affordable and easier to use, though it does not offer the same reproducibility, which may translate into a higher number of animals per study [85].
Other SCI protocols include incomplete sectioning of the cord with a scalpel or iridectomy scissors, the most frequent of which are spinal hemisections [42,47,52]. This model is particularly useful in studies in which the goal is to compromise a particular area of the cord. Hemisections also simulate more clinically relevant injuries when compared to complete transection, allowing comparison between injured and uninjured fibers in the same individual [88]. However, they do not account for contralateral neuroplasticity, reproducibility is more difficult to ensure, and additional techniques are necessary to guarantee injury consistency between experimental animals [85]. Compression models were the least-reported protocol to induce SCI (7%) [32,36,54,56]. These are helpful to simulate spinal canal occlusion and subsequent ischemia, which are common in clinical injuries.
Based on our data, all of the compression models used an aneurysm clip. This technique provides a controlled and highly reproducible injury. It is affordable and allows lesion severity to be adjusted by changing the force exerted by the clip and the duration of the compression. However, compression models are less controlled than automatized contusion protocols [85].
Multiple Sclerosis
MS was the second most-reported animal model of NDO (10% of the studies) [65][66][67][68][69]. MS is the leading cause of non-traumatic disability affecting the CNS, described as a neurodegenerative auto-immune disorder causing progressive neural demyelination and axonal degradation, with a typically relatively early onset [5]. The consequences of MS for LUT function are thought to be attributed to spinal cord demyelination, which likely provokes imbalances between the inhibitory and excitatory neurotransmission between the spinal and supraspinal centers controlling the micturition reflex [89]. MS-related urinary impairments are variable and likely correlated with the severity of the MS phenotype, with detrusor overactivity noted in 50% to 90% of patients, whereas detrusor areflexia is observed in 20% to 30% [90].
Currently, the most commonly used animal model to study MS is the EAE mouse. In these animals, autoimmunity to CNS components is induced through the administration of myelin peptide fragments, which induces a rapid autoimmune reaction directed at the myelin sheath [91]. Our search identified this model as the most prevalent in urologic investigations [66,68,69], using PLP139-151 [66,69] or MOG peptides [68] to initiate the auto-immune response. Each of these molecules induces a distinct phenotype, with differences in regional/tract specificity, the kinetics of demyelination, and motor neuron involvement [91]. The limitation of this model is related to the discrepancies in the pathogenesis of EAE compared with human MS, as these models provide little information about disease progression and the role of specific T cells in MS pathogenesis. Furthermore, sex- and strain-based differences are observed in the clinical course of EAE, stressing the importance of carefully choosing the experimental animals to be used in terms of age and sex.
Another model used to induce MS was the CIE mouse [65,67]. In this case, the MS phenotype was induced by a single intracranial injection of mouse hepatitis virus (MHV). The pathology progression is contingent on the amount of virus introduced, the age of the animal, and the strain of the murine coronavirus used, which permits control of phenotype progression. The CIE progression phenotype is more similar to the human condition, which constitutes the major advantage of this model.
Parkinson's Disease
PD models were used in 6% of the analyzed studies [64,71,73]. PD is a neurodegenerative disorder characterized by progressive degeneration of dopamine-producing neurons in the substantia nigra of the midbrain. Together with motor symptoms, PD patients also suffer from lower urinary tract symptoms, present in 38% to 71% of diagnosed patients [92], most frequently urgency and nocturia [93]. Loss of dopamine in the substantia nigra leads to selective depletion of the same transmitter in the striatum, accompanied by a reduction in the expression of D1 receptors in the same locations. In normal conditions, D1 receptors are involved in the inhibitory mechanisms that control storage periods [94]. Therefore, loss of D1 receptors leads to incontinence. Other PD-like pathologies, such as lesions of the basal ganglia, also result in loss of voluntary control over the micturition reflex, leading to uninhibited detrusor contractions at low bladder volumes [92,94].
PD is a multifactorial disease. Most cases are thought to be sporadic, but specific genetic mutations have been linked to familial PD. Animal models for PD investigation can be classified into toxin or genetic models. Toxin-based models induce fast degeneration of the nigrostriatal dopaminergic neurons. In our search, the toxins used to induce PD were 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) [73] or 6-hydroxydopamine (6-OHDA) [71]. Due to their structural similarity to dopamine, these toxins are taken up by dopaminergic neurons through the dopamine transporter, causing degeneration of these cells [95]. Toxin models are preferred when the goal is to study the consequences of the disease, including urinary dysfunction, rather than its onset, since they are easy to reproduce and have low associated costs. However, toxin-based models do not fully recapitulate human PD, which has a slow and progressive onset. In this regard, genetic models are more suitable, as they provide a more realistic, human-like disease onset, offering tools to study the molecular mechanisms associated with pathology onset. Nevertheless, our search found just one study relying on genetic models, in which the GM2 gene was deleted in mice [64].
Meningomyelocele
Meningomyelocele was investigated in 6% of our retrieved articles [74][75][76]. This pathology is the most severe type of spina bifida, a congenital neurological abnormality occurring when the spinal cord does not form properly due to defective closure of the caudal neuropore of the neural tube. As the spinal nerves controlling bladder function do not form correctly, meningomyelocele is accompanied by neurogenic bladder symptoms [96], including NDO and DSD. To induce meningomyelocele, studies resorted to pregnant female rats, intragastrically injected with retinoic acid on embryonic day 10. This model is capable of reproducing the entire spectrum of severity observed in human meningomyelocele, ranging from exposure of the cord with intact neural elements to complete cord destruction [97].
Cerebral Vascular Accidents
Cerebral vascular accidents were the least-used animal NDO models in our search [70,72]. However, more than half of stroke patients, either ischemic or hemorrhagic, report symptoms of urinary dysfunction, including urinary urgency, frequency, and urge incontinence. The presence of DSD is also encountered [98]. These urodynamic symptoms may be present within 72 h of the cerebrovascular accident and, in 30% of patients, within four weeks after that time point [99]. In our search, we found two methods to induce stroke: middle cerebral artery occlusion (MCAO) and enzymatic induction of cerebral hemorrhage. The MCAO model, used to study ischemic stroke [70], is achieved by the insertion of a filament in the middle cerebral artery, which is removed afterwards. This produces a transient ischemia followed by the restoration of blood circulation, as happens in humans. This method avoids the need for craniotomy and its possible negative effects on blood-brain barrier permeability and intracranial pressure [100]. However, MCAO may cause subarachnoid hemorrhage, tracheal edema, and paralysis of the muscles of mastication and swallowing if the external carotid artery is damaged [100]. The other method referred to is the induction of cerebral hemorrhage. In this case, the hemorrhage is induced by a collagenase injection in the hippocampal CA1 region [72]. Collagenase enzymatically disrupts the basal lamina of blood vessels, causing an active bleed into the surrounding tissues that generally evolves over several hours. Both methods can be adapted to injuries in any brain region.
Animal Species
Our search demonstrated that rodents were used in 95% of the retrieved articles concerning NDO. Rats were the most commonly used (73%), followed by mice (22%). Rats have the advantage of low maintenance costs, ease of care, and a well-studied anatomy [101]. Their bigger size, when compared to smaller rodents, allows for more complex surgical interventions, which is particularly important in models based on the physical lesioning of CNS areas. Several established behavioral tests, used to assess the loss and recovery of neurologic function, are better suited for rats than for other rodents. Concerning urodynamic testing, the majority of techniques are better characterized and validated in rats, providing superior testing outcomes and numerous sources of comparison [102].
However, mouse models are becoming more popular and increasingly implemented in NDO studies. Morphologically, the mouse bladder appears to be more similar to the human bladder, but the urodynamic properties of the mouse LUT have not been characterized as well as those of rats [103]. Mice offer the possibility of generating genetically modified models and have higher reproductive rates and low maintenance costs. The disadvantages of using mice are related to their smaller size, which poses problems for several induction protocols, urodynamic recordings, and urethral electromyography [103].
Despite the benefits of using experimental animals to investigate NDO pathophysiology and test new therapeutic approaches, the results should be approached with caution, and it is important to note some morphological and physiological differences between rodents and humans. In rodents, the prostate is not encapsulated within a well-formed prostatic fascia [104]. Additionally, the architecture of the pelvis and pelvic floor corresponds directly to the quadruped locomotion of rodents, and therefore differs markedly from that of bipedal humans. Functionally, there is evidence that detrusor contraction in rodents is dependent on ATP acting as a neurotransmitter, whereas in humans it is mediated by acetylcholine [103]. These differences may affect the functional outcomes [104].
Other animal models, such as rabbits and non-human primates, may provide a more physiologically relevant evaluation of outcomes compared to rodents, particularly considering the similar size of the spinal cord, comparable neurological damage mechanisms, and closer anatomical parallels. However, the use of these non-rodent animal models is limited by maintenance costs and strict ethical requirements. Accordingly, our systematic search encountered two studies using rabbits [33,41] and one using the marmoset [73].
No studies using larger mammals, such as pigs, were retrieved.
Animal Sex
Our analysis demonstrated that more than 70% of the studies favored females to induce NDO. This is likely related to the feasibility of transurethral catheterization and manual bladder emptying, since the male urethra is surrounded by the prostatic gland, which makes abdominal compression and bladder manipulation more difficult in males [33,104]. Nevertheless, sexual dimorphism in micturition behavior should be accounted for. In addition, one should not forget that some human pathologies might be more prevalent in one sex than the other, making data obtained in studies using only female or only male animals more difficult to translate clinically. Experiments concerning the effect of the estrous cycle on rat bladder contractility have pointed to a more responsive behavior of female bladders [105] compared to male bladders in response to cholinergic stimulation. This likely reflects sex-related differences in bladder expression of different subtypes of muscarinic and adrenergic receptors [106][107][108]. Accordingly, cholinergic neurotransmission is predominant in the male bladder, while the purinergic component is prevalent in females [109]. Sex differences were also noted in the expression of acid-sensitive ion channels (ASICs) and transient receptor potential vanilloid type 1 (TRPV1), both key channels for normal and pathologic bladder function [110]. These molecular discrepancies are likely reflected in bladder function.
Micturition patterns are also different between male and female experimental animals. Male voiding consists of a fast spike-like urine flow, whereas female voiding is ongoing but interrupted for short periods when bladder pressure is increased. The maximum flow rate is lower and the voiding period is shorter in female rats as compared to male rats [111]. These dimorphic micturition patterns in rats might be attributed to the different nature of the perineal muscles of the external urethral sphincter (EUS), which are less prominent in females [112]. Though these differences are considered to be of minor relevance in normal function, they might have a significant impact in pathologic conditions. To understand the real impact of sex on pathologic LUT function, research studies would benefit from using both male and female animals. However, this was only the case in two studies [64,73], both concerning PD models.
Urodynamic Recording
Changes in LUT function are evident after induction of SCI, MS, PD, meningomyelocele, or stroke. Animals present typical NDO symptoms, including increased voiding frequency, basal pressure, maximum voiding pressure, threshold pressure, and high amounts of residual urine. As a result, the voided volume per contraction is reduced and a significant decrease in voiding efficiency is evident. Some studies also reported the presence of DSD. The majority of the retrieved articles evaluated the consequences for LUT function by using urodynamic recording techniques.
For decades, the gold standard method to perform urodynamic evaluation in animals has been cystometry under urethane anesthesia. This drug is the most-used anesthetic for urodynamic recording, as it is recognized as the anesthetic that best preserves micturition reflexes. Nevertheless, urethane interferes with urethral sphincter activity, resulting in a reduction in voiding efficiency and an increase in post-void residual volume [113][114][115][116][117]. Moreover, urethane anesthesia is limited to terminal procedures, due to its adverse post-operative health effects and carcinogenic risks [118]. One could speculate that recent papers would resort to better anesthetic options, but urethane remains the main choice for urodynamic evaluations [26][27][28]37,41,43,51,57], as it has low associated costs, it is easy to deliver, and there are abundant published references using this anesthetic for cystometries [102,119].
Fewer studies have used Zoletil anesthesia as an alternative to urethane [53,58,61,74]. Zoletil is a combination of tiletamine and zolazepam and was used in 8% of the articles in our search. Zoletil produces a smooth conscious sedation, characterized by a rapid induction period, excellent muscle relaxation with a wide safety margin, and a smooth recovery [120]. One article included in our search used chloral hydrate as an alternative to urethane to perform cystometries [54]. This anesthetic is no longer recommended, due to its toxic components.
In an attempt to overcome the negative effects of anesthetics on LUT function, awake cystometries have arisen as a popular method to record bladder and urethral function in rats and mice. Unlike anesthesia protocols, in which environmental cues and diurnal variations are suppressed, it is necessary to consider that cystometries in awake conditions are influenced by external factors, such as light and noises. Furthermore, as rodents are nocturnal, the experiments must be performed during night time, or animals must be acclimatized to inverted cycles of light [102]. The variety of awake recording systems range from restrained to freely moving animal approaches [121]. In restrained animals, it is easier to manage the position of the intravesical catheter, and to prevent the occurrence of urodynamic artifacts. However, despite pre-testing habituation to the cystometry stations, restraining may cause high levels of stress, which can increase sympathetic activity, favoring storage and potentially prolonging the time of bladder filling until micturition [122]. Unrestrained conditions, using metabolic cages, closely resemble physiological conditions and are assumed to better record LUT function [123], but studies using this approach are still scarce.
For urodynamic assessment in experimental animals, it is necessary to place an intravesical catheter that allows for saline injection and/or recording of bladder contractions. These catheters can be placed acutely or be indwelling. Acutely placed catheters are suitable for terminal procedures, in which the recordings occur right after implantation surgery and the animal is euthanized immediately after recording. There are, however, disadvantages linked to the use of acute catheters, such as postoperative pain and the fact that the anesthetics used in the surgical procedure might affect LUT activity [102,122]. An alternative is the use of chronic indwelling catheters and electrodes for urethral electromyography, which can be externalized on the animal's dorsum and maintained for long periods, permitting animal stabilization after surgery. Additionally, they also allow testing of the same animal several times during the experimental protocol, eliminating inter-animal variability and reducing the number of animals required [124]. However, these systems are associated with high maintenance costs and complex post-operative care to maintain the functionality of the externalized components for extended periods and the wellbeing of the animals [102,124].
While urodynamic recording is the only method that can objectively assess lower urinary tract function, non-invasive methods, including the voiding spot assay, were also reported in our search [64,65,67,68]. The voiding spot test has the advantage of being minimally invasive, inexpensive, and easy to implement. However, the urodynamic data are poor, only recording voided volumes and spatial and temporal organization of urinary spots [102]. Surprisingly, this was the preferred method to evaluate bladder changes after induction of neurodegenerative disorders [64,65,67,68].
A significant portion of the retrieved studies did not present any urodynamic data [29,30,36,44,45,49,50,60,62,66,69,[73][74][75][76], which was quite unexpected for studies with a focus on urinary dysfunction and NDO. One could speculate that these studies used animal models that are already established, so their effects were already known and described. These studies focused on aspects of the disease other than urinary function, including molecular alterations. In fact, the lack of any evaluation of LUT function was seen in studies using animal models of myelomeningocele, in which urodynamic recording would be difficult to perform.
Changes in Bladder and Neural Tissue Morphology
Selected studies highlighted significant findings regarding bladder and neuronal tissue morphology. Bladder tissue was generally more fibrotic in NDO animals, in tandem with findings in humans [125]. Histologically, bladder fibrosis is described as an increase in connective tissue elements, particularly in the detrusor, where collagen fibers heavily surround smooth muscle cells. These changes are driven by several molecular factors, which are upregulated in NDO bladders, and represent tissue remodeling following DSD and the consequent increase in bladder volume load. This functional obstruction also leads to detrusor smooth muscle hypertrophy, chronic inflammation, and edema. All these features result in an increase in bladder weight, reflecting bladder wall thickening [59]. Smooth muscle ultrastructural changes after SCI-induced NDO were also found in the retrieved studies, such as mitochondrial swelling and endoplasmic reticulum hypertrophy [54], consistent with smooth muscle hypertrophy and increased intensity of bladder contractions. The mucosa, particularly the urothelium, also undergoes plastic changes, shown to contribute to impaired urinary function in NDO models [126].
Concerning neuronal tissue, various CNS and PNS structures are affected, depending on the NDO model. SCI was the model that presented the greatest morphological changes, with formation of fibrotic scars at the injury site, associated with recruitment of microglia, astrocytes, macrophages, and other inflammatory cells. These cells eventually fill the injury core and are involved in complex crosstalk to repair the injured tissue, but prevent axonal regrowth [127]. Because SCI is the most-used method to induce NDO, the considerations below mostly refer to SCI.
Molecular Factors
Molecular changes in the bladder and neuronal tissue after NDO induction are, respectively, presented in Tables 2 and 3. These variations were detected through either protein or RNA analysis. The majority of data gathered was obtained from SCI studies.
Neurotrophic Factors
Neurotrophic factors are growth factors that play a critical role in neuron survival and regeneration, and include nerve growth factor (NGF), brain-derived neurotrophic factor (BDNF), and glial cell-derived neurotrophic factor (GDNF) [128,129]. NGF is a small molecular weight protein, involved in urinary dysfunction in several contexts, including SCI [130,131]. In the bladder, NGF is secreted by smooth muscle and urothelial cells [132][133][134], and its levels are increased in response to inflammation or denervation [135]. In animal models of NDO, bladder NGF levels also vary, increasing after SCI [29,46,48,56] and being reduced in cerebral ischemia and MS animals [66,70]. In the latter, the time point studied referred to chronic stages of disease progression and it is not possible to exclude increased NGF levels in the acute phase of the MS model [66]. Importantly, while it is possible to observe that, as in cystitis [134,136,137], SCI-induced NDO courses with high NGF levels [138][139][140], the same does not happen in MS models. While this likely reflects different pathophysiological mechanisms for NDO, the precise reasons can only be speculated at present. Importantly, high levels of bladder NGF coursed with upregulation of the phosphorylated form of the high-affinity NGF receptor TrkA, which was also observed in PD models [29,64].
In neuronal structures, NGF was also upregulated after NDO induction, when quantified in nervous system structures such as the dorsal root ganglia, the supraspinal neuronal voiding centers, and the spinal cord [48,53,57,61,72]. Nevertheless, it is important to point out that this analysis was not performed on an SCI or an MS model. Regarding BDNF, protein expression was increased in SCI animals, both in the bladder and the spinal cord [27,61], but variations of its levels in other models were not found. Changes in GDNF levels were only reported in an MS model, in which the bladder contents were found to be reduced [66]. Such changes in neurotrophic factors are likely involved in the abnormal axonal sprouting resulting in expansion of C-fibers in the bladder wall and lumbosacral spinal cord, a key event in NDO development and maintenance [12,26,27,80].
Inflammatory Mediators
Changes in inflammatory molecules, including pro- and anti-inflammatory cytokines, were described in articles using MS animal models. These mediators, such as IL-2 and TGF-β, were dramatically increased in the bladder tissue [59]. In neuronal tissue, the variation in expression levels of cytokines is complex to analyze. Inflammatory cytokines were increased 1 week but decreased 10 weeks after MS induction [65,67]. The first week in animal MS models may represent an acute immune event, linked to inflammatory demyelination, while the reduction at 10 weeks likely reflects modulation of immune responses. These changes coursed with NDO installation and likely reflect changes in immunological activation associated with MS.
Apoptosis-Related Factors
Apoptosis is the process of programmed cell death, involving several players in complex pathways, including enzymes such as caspases. In animal models of SCI, Caspase-3 was found to be activated at the injury site, since trauma and ensuing events lead to cell death of resident and invading cells [127]. After traumatic SCI, spinal tissue at the injury site undergoes major transformations. Healing is a complex process that results in tissue remodeling, which seals the injured location. Traumatic SCI causes direct tissue destruction (compression, laceration, shearing of the cord) that results in profound histological modifications at the injured location [141][142][143]. This is followed by production of free radicals, lipid peroxidation, altered ATP production, invasion of peripheral immune cells (including neutrophils, lymphocytes, and monocytes) due to breakdown of the blood-brain barrier, activation of resident astro-and microglia, and neuronal and glial apoptosis, all of which contribute to further damage of the injured area [142]. The final step consists of the formation of a glial scar, highly repulsive to axonal growth, preventing appropriate rewiring, reestablishment of connections between supraspinal centers and lumbosacral neurons and, ultimately, full recovery [142]. While apoptosis is central at the injury site within the spinal cord, we found no reference to a direct link with NDO development or maintenance. No studies addressed the presence of pro-apoptotic elements in the bladder of NDO animals.
Muscarinic Receptors
Muscarinic receptors play an important role in detrusor contraction and can be found in the detrusor layer and the mucosa, participating in the urothelium-detrusor crosstalk and regulating detrusor contraction [144,145]. The mRNA levels of M2 and M3 muscarinic receptors in the bladder mucosa of SCI animals (6 weeks after lesion) were downregulated, but only M2 protein levels were reduced when compared to controls [44]. Another study with rodents, not included in this review [144], showed an increase in M2 subtype transcript in the bladder mucosa 2 weeks after SCI, returning to basal levels by 4 weeks; there was no similar pattern for the M3 subtype. Detrusor protein expression of M2 receptors increased during the chronic SCI period, while the M3 subtype was downregulated [56]. In animals with cerebral ischemia, M3 was downregulated [70], but there are conflicting observations regarding M2 levels, possibly reflecting different analytic techniques [70]. Changes in the expression of muscarinic receptors may underlie the lack of response of patients to anti-muscarinic therapy. This is relevant as treatment of NDO is typically initiated with anti-muscarinic drugs [9,16]. If patients do not respond to low amounts of these drugs, the dosage is increased, but only refractory patients will receive botulinum toxin A as the last-resort treatment [146,147].
Adrenergic Receptors
In terms of adrenergic receptors, the retrieved studies documented changes in their expression in the bladder of SCI animals presenting NDO. In the clinical setting, α1a adrenergic receptor (AR) antagonists have been used in multiple pathologies, such as prostatic benign hyperplasia [148] and DSD after SCI, to produce muscle relaxation and decrease urethral sphincter pressure and obstructive symptoms [60]. However, this therapy is not always fully effective, which likely reflects the downregulation of α1a adrenergic receptor expression after SCI [60]. The expression of β2-adrenergic receptors in the bladder was also studied in the context of SCI and a reduction in the bladder levels of β2-adrenergic receptors was found [56].
Purinergic Receptors and Transient Receptor Potential Channels
The importance of P2X and TRP receptors in urinary function is well established [149]. We found two contradictory perspectives on the expression of P2X purinergic receptors and TRP channels in SCI animals. Concerning the expression of these receptors in DRG cells, four studies [35,46,49,63] reported upregulation of TRP and P2X receptors, while another study [45] reported downregulation of TRP/P2X pathway elements. Similarly, in the bladder, three studies [30,51,56] showed upregulation of TRP and P2X elements, while one reported downregulation [45]. These divergent observations likely originate from different methodologic approaches, as the majority of available studies refer to the upregulation of these ion channels as key events explaining the enhanced excitability of C-afferents, known to be a driver of NDO development and maintenance in SCI [11,12,78].
Neuronal Markers
Analysis of β-III-tubulin, a pan-neuronal marker, in 9-week SCI rats shows upregulation of this protein in bladder tissue, indicating the occurrence of hyperinnervation in the bladder wall and demonstrating neural plasticity and compensatory axonal regeneration [36]. This agrees with increased bladder levels of neurotrophins, such as NGF, which induce axonal growth and branching [130,131].
In the nervous system of NDO rats, the expression of several neuronal markers (namely CRF, GAD2, NF200, TH, VAChT, and VGLUT) was found to be generally downregulated, due to denervation associated with SCI, MMC, and PD, the latter most evident in the substantia nigra and ventral tegmental area [40,47,49,64,73,74]. In contrast, the expression of CGRP, a marker of sensory innervation, was upregulated in the lumbosacral spinal cord and the L1 and L6 DRGs of SCI animals, in tandem with what has been described in the bladder [26] and coursing with levels of spinal NGF [140].
Ischemia-and Fibrosis-Related Molecules
This category includes not only molecular factors that play a critical role in the setting of fibrosis, such as CTGF, FGF, and TGF, but also a broader group of molecules responsible for ischemic response, such as HIF and VEGF. All were upregulated in the bladder and neuronal tissue in SCI and MS models. This indicates that, in both conditions, NDO development and maintenance are associated with intensive tissue remodeling [59].
Astrocyte-derived chondroitin sulphate proteoglycans (CSPGs)-phosphacan and neurocan-were also included in this group, since they are the central extracellular matrix components of the spinal fibrotic scar that seals the injury site after SCI [28,127]. The levels of CSPGs are highly increased at the injury site after SCI [142,150], correlating with upregulation at the same location of S100, a glial marker [41], consistent with the accumulation of glial cells and scar formation [142,150]. Importantly, CSPGs are also elevated in segments distant from the scar [28,151], indicating a widespread response to SCI. While this upregulation of CSPG content is exuberant at the injury site, it is more controlled and restricted in segments distant from the injured tissue, as only the expression of specific CSPGs is changed in a time-dependent manner [28,151]. CSPGs are known to be involved in axon guidance regulation [152] and it is possible that this lumbosacral upregulation might be linked to the establishment of new neuronal circuits responsible for abnormal bladder function after SCI.
Myelin-Associated Proteins
This group of molecules can be divided into two different clusters: proteins associated with the myelin sheath, such as myelin basic protein (MBP), and myelin-associated inhibitory proteins (MAIs)-MAG, Nogo-A, and OMgp. MBP was downregulated in the spinal cord after both SCI and MS induction. MBP downregulation possibly reflects loss of myelin sheaths due to apoptosis of neurons and oligodendrocytes secondary to SCI [62], while its downregulation in MS models can be explained by CNS demyelination. Concerning MAI expression in the lumbosacral cord, levels of Nogo-A were transiently upregulated after thoracic SCI, without changes in MAG and OMgp [28]. Changes in the expression of these guidance molecules may well be linked to neuroplastic events leading to NDO establishment.
Conclusions
NDO is a common consequence of neurologic injuries, with a tremendous impact on the quality of life of affected patients. Animal models of NDO have been critical for understanding the pathophysiology of the disease, as well as for studying the potential for recovery and implementing new therapeutic targets for affected patients. In this review, we describe the currently used animal models to study NDO, and discuss them in terms of species, sex, urodynamic recordings, and molecular alterations observed in bladder and neuronal tissue. Despite the heterogeneity of NDO onset, the vast majority of studies concerning molecular mechanisms associated with this pathology are based on traumatic SCI models. However, NDO is also a consequence of several progressive diseases, such as neurodegenerative disorders, meningomyelocele, and stroke, about which there is much less information. It is important that future studies focus on these disorders to provide a better understanding of the pathophysiological mechanisms leading to NDO, which is important for the development of new therapies improving these patients' quality of life. The list of molecular changes found in the present review is vast and includes the upregulation of inflammatory mediators and molecular markers of fibrosis. Moreover, there is also significant evidence of neuronal plasticity, with increased expression of neuronal receptors, neurotrophins, and myelin-associated proteins both in the bladder and neuronal tissue, supporting the wide range of neuroplastic events that result in NDO. While it is difficult to grasp and integrate the enormous number of published results, it is clear that NDO pathophysiology is complex and, consequently, its treatment and management are difficult. Like other researchers [18] and following the present review, we propose that many key players are active and interacting at different stages of disease progression. It is likely that future interventions will result from the combination of different drugs simultaneously targeting different molecules. Future research should use comprehensive strategies, possibly automated, to identify synergistic changes and key events that could be therapeutically targeted.
Literature Search
The present systematic review was conducted following the PRISMA 2020 checklist [153]. We aimed to analyze scientific articles that addressed molecular changes associated with neurogenic detrusor overactivity. On 19 September 2022, the search was conducted in three databases: PubMed Central (via PubMed), and Medline and Embase (via Scopus). The following query was used: ((Animal model) OR (rat) OR (mice) OR (rabbit) OR (pig)) AND ((Neurogenic detrusor overactivity) OR (neurogenic bladder)). No filters were used, and the search was limited to articles published between 1 January 2012 and 19 September 2022. The year 2012 was chosen as a reference because it was the year in which some automated devices for spinal contusion became commercially available [154]. This search generated 648 results.
Selection
The studies retrieved were imported into Endnote, and duplicate articles were excluded. The remaining articles were then imported into the Rayyan platform, where the remaining duplicates (not detected by Endnote) were identified and excluded. The resultant articles (365) were submitted to title and abstract screening, independently conducted by two investigators. The inclusion criteria were: (1) studies including an animal model of neurogenic detrusor overactivity; (2) studies including at least one molecular analysis technique; and (3) studies published in English. The exclusion criteria were: (1) non-original studies, such as reviews, conference abstracts, and editorials; (2) studies conducted in humans, such as case reports and clinical trials; (3) in vitro studies; (4) absence of data on molecular alterations; (5) studies using an SCI model which merely present molecular results obtained from animals euthanized 14 days after SCI induction (acute SCI); (6) studies focusing on NDO of peripheral origin; and (7) impossibility of obtaining the full-text article (even after contacting the authors). Articles proceeded to full-text screening if they met all inclusion criteria and none of the exclusion criteria.
To assess the risk of bias in our work, SYRCLE's risk of bias tool was used. This checklist was adapted from the Cochrane risk of bias tool and adjusted for experimental animal studies. The tool was developed by Hooijmans et al., and focuses on evaluating selection bias, performance bias, detection bias, attrition bias, and reporting bias in animal experimental studies [155]. The Cochrane risk of bias (RoB) checklist was also consulted [156].
Data Extraction
Outcomes for which data were sought were the following: (1) study characteristics (author and year of publication); (2) model of neurogenic detrusor overactivity used; (3) animal species used; (4) animal sex; (5) model induction method; (6) urodynamic findings; (7) changes in bladder tissue; (8) changes in neuronal tissue; and (9) therapies and mechanisms identified. If any of these study characteristics were not evident from full-text analysis, the authors were contacted. Basic study characteristics, such as animal sex, are described as unknown if there was no answer from the authors. All gathered data have been included in Table 1. Two independently working reviewers extracted the most relevant data from every included article. No automated tools were used.
Author Contributions: All authors contributed to the study conception and design. Material preparation, data collection, and analysis were performed by A.F. and D.N. The first draft of the manuscript was written by A.F. and D.N. All authors commented on all versions of the manuscript. All authors have read and agreed to the published version of the manuscript.
Conflicts of Interest:
The authors declare no conflict of interest. | 2023-02-10T16:03:39.114Z | 2023-02-01T00:00:00.000 | {
"year": 2023,
"sha1": "840cfdf7f9f4a79f65a3b6a38598d9bc3e1dd94b",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/24/4/3273/pdf?version=1675762390",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "856d41e7ac51a7391fe6d215c9c0e08c5476805b",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": []
} |
256196693 | pes2o/s2orc | v3-fos-license | Phloretin ameliorates hepatic steatosis through regulation of lipogenesis and Sirt1/AMPK signaling in obese mice
Phloretin, isolated from apple trees, can increase lipolysis in 3T3-L1 adipocytes, and previous studies have found that it can prevent obesity in mice. In this study, we investigated whether phloretin ameliorates non-alcoholic fatty liver disease (NAFLD) in high-fat diet (HFD)-induced obese mice, and evaluated the regulation of lipid metabolism in hepatocytes. HepG2 cells were treated with 0.5 mM oleic acid to induce lipid accumulation, and then treated with phloretin to evaluate the molecular mechanism of lipogenesis. In another experiment, male C57BL/6 mice were fed a normal diet or HFD (60% fat, w/w) for 16 weeks. After the fourth week, mice were treated with or without phloretin by intraperitoneal injection for 12 weeks. Phloretin significantly reduced excessive lipid accumulation and decreased sterol regulatory element-binding protein 1c, blocking the expression of fatty acid synthase in oleic acid-induced HepG2 cells. Phloretin increased Sirt1 expression and phosphorylation of AMP-activated protein kinase, suppressing acetyl-CoA carboxylase expression and reducing fatty acid synthesis in hepatocytes. Phloretin also reduced body weight and fat weight compared to untreated HFD-fed mice, reduced liver weight and liver lipid accumulation, and improved hepatocyte steatosis in obese mice. In liver tissue from obese mice, phloretin suppressed transcription factors of lipogenesis and fatty acid synthase, and increased lipolysis and fatty acid β-oxidation. Furthermore, phloretin regulated serum leptin, adiponectin, triglyceride, low-density lipoprotein, and free fatty acid levels in obese mice. These findings suggest that phloretin improves hepatic steatosis by regulating lipogenesis and the Sirt1/AMPK pathway in the liver.
NAFLD is considered to be a complex metabolic syndrome of abnormal liver metabolism. Patients with obesity or type 2 diabetes also often suffer from NAFLD. Epidemiological studies indicate that approximately 75% of overweight and obese patients in developed and developing countries have NAFLD [3]. The initial stage of NAFLD is excessive lipid accumulation in the liver, which is known as liver steatosis. A total of 5-20% of patients with steatosis will develop the more severe nonalcoholic steatohepatitis (NASH), which is characterized by liver inflammation, fibrosis, and tissue damage [4]. Patients with NASH who do not receive medical treatment or adjust their unhealthy lifestyle and eating habits may develop irreversible liver fibrosis, cirrhosis, liver failure, and liver cancer [5]. In addition, NAFLD is thought to increase the risk of cardiovascular disease (coronary atherosclerosis) [6]. Therefore, treatment of NAFLD could attenuate the incidence of many chronic diseases.
The pathological mechanism of NAFLD is not fully understood. Hepatic steatosis is caused by increased lipid accumulation in the liver and reduced lipid breakdown [6]. However, excessive fat accumulation will also induce liver lipid toxicity, oxidative stress, and inflammation, leading to hepatocyte damage and death [7]. Therefore, excessive accumulation of oil droplets will interfere with the physiological function of the liver. With excessive intake of a high-calorie diet, via digestion and metabolism, high-energy foods will be converted into triglycerides (TGs) in the liver and adipocytes [3]. In the lipid synthesis pathway, activation of the transcription factors CCAAT/enhancer-binding protein (C/EBP) and sterol regulatory element binding protein 1c (SREBP-1c) regulates the expression of fatty acid synthase (FAS) to increase the lipid synthesis reaction [8]. Therefore, the expression of SREBP-1c and FAS can be higher in the liver or visceral adipose tissue of obese individuals than in normal-weight persons.
Many studies have pointed out that accelerating lipid breakdown in the liver of an obese individual would decrease lipid accumulation in the liver and improve NAFLD [9]. In the liver, TG decomposition requires lipases, including adipose triglyceride lipase (ATGL) and hormone-sensitive lipase (HSL), which can break down TGs to produce free fatty acids and glycerol [10]. ATGL-knockout mice cannot lose significant weight under the conditions of proper exercise and calorie restriction [11]. Clearly, the activation of lipases could help improve NAFLD. However, the excessive free fatty acids from the breakdown of TGs need to be converted to energy by β-oxidation [9]. Therefore, promoting lipolysis can regulate lipid metabolism and improve NAFLD towards normal liver function.
In liver, muscle, and adipose tissue, the AMPK signaling pathway can contribute to maintaining an energy balance and regulate lipid metabolism [12]. Activated AMPK can inhibit the expression of lipid synthesis-related proteins, such as SREBP1, FAS, and CD36, and can also increase expression in the fatty acid β-oxidation pathway and related enzymes (e.g., carnitine palmitoyltransferase 1 [CPT-1] and peroxisome proliferator-activated receptor alpha [PPARα]) to decrease lipid accumulation [13]. Previous studies have shown that AMPK activation can protect liver cells from oxidative damage and inflammation, and can inhibit apoptosis and improve the development of NAFLD [14]. AMPK phosphorylation can also stimulate acetyl-CoA carboxylase (ACC) phosphorylation, reducing lipid biosynthesis [13]. Sirtuin 1 (Sirt1) is an NAD+-dependent deacetylase that can contribute to regulating intracellular NAD+ levels for maintaining cellular energy balance [15]. In obese mice, adipocytes and liver cells accumulate excess lipid, which inhibits the activity of Sirt1 and AMPK, inactivating the Sirt1/AMPK pathway and aggravating the development of NAFLD [16]. Resveratrol is a Sirt1 enhancer. The treatment of obese mice with resveratrol could induce the expression of Sirt1 and phosphorylation of AMPK, regulating the molecular pathways of lipid and glucose metabolism in the liver [16,17]. Therefore, stimulating the activation of the Sirt1/AMPK pathway will reduce lipid accumulation in the liver and improve hepatic steatosis in obese and overweight individuals.
Clinically effective drugs for preventing or treating NAFLD are still lacking [18]. Plant extracts or natural products have been investigated extensively for preventing or improving obesity and NAFLD [19,20]. Phloretin is isolated from apple trees [21]. Our previous studies found that phloretin can inhibit lipid accumulation in 3T3-L1 adipocytes via inhibition of adipogenesis-related transcription factors and promotion of AMPK phosphorylation [22]. Phloretin can also reduce inflammatory adipokines and increase adiponectin expression in LPS- or LPS/CoCl₂-stimulated 3T3-L1 adipocytes [23]. Other studies have found that phloretin can prevent weight gain and regulate blood glucose in high-fat diet (HFD)-induced obese mice [24]. However, the beneficial effects of phloretin on NAFLD are unclear. In the present study, we investigated whether phloretin regulates lipogenesis in HepG2 hepatocytes in vitro, ameliorates NAFLD, and modulates the molecular mechanism underlying lipid metabolism in HFD-induced obese mice.
Cell culture and induced fatty liver cells
The HepG2 hepatocyte cells were cultured in DMEM medium supplemented with 1% penicillin-streptomycin solution and 10% fetal bovine serum. Phloretin (isolated from apple wood, purity ≥ 99% by HPLC) was purchased from Sigma-Aldrich (St. Louis, MO, USA). Phloretin was dissolved in DMSO, and for all cell experiments the final culture concentration of DMSO was < 1%. For cell viability assays, HepG2 cells were treated with various concentrations of phloretin or oleic acid for 48 h, and cell viability was measured using MTT solution (Sigma) as described previously [25]. The culture plate was treated with isopropanol to evaluate cell viability using a microtiter plate reader (Multiskan FC, Thermo Fisher Scientific) at an absorbance of 550 nm. To detect lipid accumulation in hepatocytes, HepG2 cells were incubated with 0.5 mM oleic acid to stimulate lipid accumulation for 48 h. Cells were then treated with vehicle (0.1% DMSO) or phloretin (3-30 μM) for 24 h to examine the molecular mechanism of lipid metabolism. In other cellular experiments, phloretin-treated HepG2 cells were treated with the AMPK inhibitor compound C (Sigma) to evaluate the molecular expression of lipid metabolism.
Oil Red O staining
HepG2 cells were incubated with 0.5 mM oleic acid for 48 h and then treated with or without phloretin for 24 h. The culture plate was washed and cells fixed with formalin. Hepatocytes were stained with Oil Red O solution to observe the distribution of oil droplets in liver cells as described previously [26]. Finally, hepatocytes were treated with isopropanol to assay lipid accumulation using a microplate reader (Multiskan FC) at an absorbance of 490 nm.
Animals and treatments
Four-week-old male C57BL/6 mice (National Laboratory Animal Center, Taiwan) were randomly divided into four groups of 10. The HFD was based on the research diet TestDiet 58Y1 (Purina TestDiet, St. Louis, MO, USA), containing 23.1% protein, 34.9% fat, and 25.8% carbohydrates, with fat supplying 60% of energy. The normal control group (N) and HFD control group (HFD) were fed a standard chow diet or HFD, respectively, and administered DMSO solution by intraperitoneal injection. The PT10 and PT20 groups were fed a HFD and administered 10 mg/kg and 20 mg/kg phloretin by intraperitoneal injection, respectively. The HFD, PT10, and PT20 groups were fed a HFD for 4 weeks before treatment began. All mice were then treated with 50 μl DMSO or phloretin (dissolved in DMSO) twice a week for 12 weeks (Fig. 2a). Body weight was measured weekly, and food intake (the weight of consumed food (g) × calories in the diet) was recorded each day. Animal experiments were approved by the Laboratory Animal Care Committee of Chang Gung University of Science and Technology (IACUC approval number: 2015-019).
Biochemical analysis
Mice were anesthetized with isoflurane and blood collected via the orbital vascular plexus. The blood was centrifuged and the serum collected to detect free fatty acids and low-density lipoprotein (LDL) using a fatty acid quantitation kit and LDL quantitation kit (Sigma), respectively, according to the manufacturer's protocol as described previously [26]. The serum levels of glutamate oxaloacetate transaminase (GOT), glutamate pyruvate transaminase (GPT), total cholesterol (TC), high-density lipoprotein (HDL), and total TGs were measured using a biochemical analyzer (DRI-CHEM NX500, Fuji, Tokyo, Japan). The day before the end of the experiment, mice were fasted for 16 h and administered glucose by intraperitoneal injection to assay blood glucose levels using the biochemical analyzer (Fuji). Blood insulin was detected using the Insulin EIA Kit (Cayman, Ann Arbor, Michigan, USA) according to the manufacturer's protocol. Liver glycogen was detected using the Glycogen Assay Kit (Cayman) according to the manufacturer's protocol, and the glycogen levels were measured using a microplate reader (Multiskan FC) at an absorbance of 570 nm.
Histological analysis
Liver tissues were removed and embedded in paraffin. All tissues were cut into 6-μm sections and stained using hematoxylin and eosin (HE) solution as described previously [27]. Glycogen expression in the liver tissue was detected by periodic acid-Schiff (PAS) staining. All biopsy specimens were observed under a light microscope (Olympus, Tokyo, Japan), and an NAFLD score was evaluated as described previously [28]. Furthermore, epididymal and inguinal adipose tissues were removed, weighed, and fixed in formalin solution. Next, adipose tissues were embedded in paraffin as described previously [26]. In brief, all adipose tissues were cut into 6-μm sections and stained with HE solution, and photographs were taken with a light microscope (Olympus). Five fat cells were selected from each adipose tissue image to calculate the cell area using cellSens Standard software (Olympus).
Real-time PCR
Liver tissues in TRI reagent (Sigma) were homogenized using a homogenizer (FastPrep-24, MP Biomedicals, Santa Ana, CA, USA). Samples were centrifuged and the supernatant collected to extract total RNA. Next, cDNA was synthesized using the cDNA Synthesis Kit (Life Technologies, Carlsbad, CA) as described previously [29]. Using SYBR Green fluorescent labeling of the cDNA samples, specific gene expression was amplified and quantified using a spectrofluorometric thermal cycler (iCycler; Bio-Rad Laboratories, Hercules, CA, USA).
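The paper reports fold-changes relative to β-actin but does not state the quantification formula; the widely used 2^(−ΔΔCt) method is assumed in the minimal sketch below. The gene name and cycle-threshold (Ct) values are hypothetical placeholders, not data from the study.

```python
# Hedged sketch of relative qPCR quantification. Fold-changes relative to the
# beta-actin internal control are computed with the 2^(-ddCt) method (an
# assumption; the paper does not name the formula). Ct values are hypothetical.
def fold_change(ct_gene, ct_actin, ct_gene_ref, ct_actin_ref):
    """Relative expression by the assumed 2^(-ddCt) method."""
    d_ct = ct_gene - ct_actin              # normalize to internal control
    d_ct_ref = ct_gene_ref - ct_actin_ref  # reference (e.g., Normal group)
    return 2.0 ** -(d_ct - d_ct_ref)

# Hypothetical cycle thresholds for Srebp-1c in HFD vs. Normal liver.
print(f"fold change: {fold_change(24.1, 18.0, 25.6, 18.1):.2f}")
```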
Statistical analysis
Statistical analyses were performed by one-way analysis of variance (ANOVA) and a Dunnett post hoc test. All data were investigated in at least three independent experiments, and data are presented as the mean ± SEM. P < 0.05 was considered significant.
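As a minimal illustration of the stated workflow (one-way ANOVA followed by a Dunnett post hoc test), the sketch below uses SciPy (version ≥ 1.11 for scipy.stats.dunnett). All sample values are hypothetical placeholders loosely inspired by the group sizes reported in this paper, not the study's data.

```python
# Minimal sketch of the statistical workflow: one-way ANOVA followed by
# Dunnett's post hoc test against the HFD control group (SciPy >= 1.11).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical body-weight measurements (g) for four groups of n = 10 mice.
normal = rng.normal(28.0, 1.0, 10)
hfd    = rng.normal(43.4, 1.9, 10)   # untreated high-fat-diet controls
pt10   = rng.normal(39.7, 1.1, 10)   # 10 mg/kg phloretin
pt20   = rng.normal(37.6, 1.3, 10)   # 20 mg/kg phloretin

# One-way ANOVA across all groups.
f_stat, p_anova = stats.f_oneway(normal, hfd, pt10, pt20)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3g}")

# Dunnett's test: each group compared to the HFD control.
res = stats.dunnett(normal, pt10, pt20, control=hfd)
for name, p in zip(["Normal", "PT10", "PT20"], res.pvalue):
    flag = "**" if p < 0.01 else "*" if p < 0.05 else "ns"
    print(f"{name} vs HFD: p = {p:.3g} ({flag})")
```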
Cell viability of phloretin in HepG2 cells
The cytotoxicity of phloretin in HepG2 cells was determined using the MTT assay. Phloretin did not demonstrate significant cytotoxic effects at a concentration ≤ 50 μM, and subsequent experiments used phloretin at 3-30 μM concentrations for all cell experiments (data not shown). We also determined that oleic acid concentrations ≤ 0.5 mM did not significantly affect cell viability in HepG2 cells (data not shown).
Phloretin attenuated lipid accumulation in HepG2 cells
Based on Oil Red O staining, phloretin decreased lipid droplets compared to oleic acid-induced HepG2 cells (Fig. 1a). Using isopropanol to treat hepatocytes confirmed that phloretin significantly attenuated lipid accumulation in oleic acid-treated hepatocytes (Fig. 1b).
Phloretin reduced HFD-induced obesity in mice
We observed the appearance of the mice at the end of the experiments and found that HFD-induced obese mice were larger and fatter than normal mice (Fig. 2b). Interestingly, PT10 and PT20 mice had significantly lower body weights than HFD mice (39.69 ± 0.36 g, P < 0.05 and 37.55 ± 0.40 g, P < 0.01 vs. 43.40 ± 0.60 g, respectively; Fig. 2c, d). For obese mice treated with phloretin for 12 weeks, the weight gain in the PT10 and PT20 groups was significantly less than in the HFD group (PT10: 10.52 ± 0.73 g, P < 0.05; PT20: 8.39 ± 0.50 g, P < 0.01; HFD: 13.80 ± 0.78 g; Fig. 2e). The PT10 and PT20 groups did not have altered food intake compared to the HFD group (Fig. 2f).
Phloretin reduced the weight of adipose tissue in obese mice
Using biopsy specimens, we found that phloretin significantly reduced the epididymal (Fig. 2g, h) and inguinal (Fig. 2j, k) adipose tissue weight compared to HFD mice. Phloretin also significantly decreased adipocyte size in the epididymal (Fig. 2i) and inguinal (Fig. 2l) adipose tissue compared to HFD mice.
[Figure 1 caption, displaced by extraction: effects of phloretin (PT) on lipid accumulation and lipid metabolism in oleic acid (OA)-induced HepG2 cells. Lipid accumulation was measured at OD 490 nm after isopropanol treatment (a, b); subsequent panels show expression of lipogenic transcription factors and FAS (c, d), β-oxidation (e, f), lipolysis by Western blot (g, h), and the Sirt1/AMPK pathway (i, j), with fold expression relative to β-actin; panels k, l show cells treated with 0.5 mM OA for 48 h, followed by 30 μM PT with or without the AMPK inhibitor compound C for 24 h. Data are mean ± SEM of three independent experiments; *P < 0.05, **P < 0.01 vs. the OA group; #P < 0.05, ##P < 0.01 vs. the group without OA treatment or the compound C group, as applicable.]
Phloretin attenuated liver steatosis in obese mice
In HFD-induced obese mice, we found many fat vacuoles and lipid droplets distributed in the liver tissue. Obese mice treated with phloretin had significantly fewer fat vacuoles and lipid droplets than untreated HFD-induced obese mice (Fig. 3a, b). We also found that phloretin reduced the liver weight compared to obese mice (Fig. 3c). However, obese mice treated with phloretin did not show a decreased liver to body weight ratio compared to HFD-induced obese mice (Fig. 3d). Furthermore, HFD mice treated with phloretin had significantly lower NAFLD scores than the HFD group (Fig. 3e). PAS staining demonstrated that phloretin increased the glycogen distribution in liver tissue compared to HFD-induced obese mice (Fig. 3f). Thus, phloretin significantly recovered the glycogen levels (Fig. 3g) and reduced the levels of TC and TG (Fig. 3h, i) in the livers of mice with HFD-induced obesity.
Effects of phloretin on serum lipid metabolism
Phloretin significantly reduced the serum levels of GOT and GPT, recovering liver function in mice with HFD-induced obesity (Fig. 6a, b). Phloretin also significantly suppressed serum free fatty acid, TC, LDL, and TG levels and increased the levels of HDL in HFD-induced obese mice (Fig. 6c-g). We also found that the administration of phloretin significantly inhibited the serum levels of leptin, glucose, and insulin and increased serum adiponectin levels compared to mice with HFD-induced obesity (Fig. 6h-k).
Discussion
Appropriate exercise and adjusted eating habits may improve obesity and NAFLD [4]. In recent years, scholars have pointed out that some pure plant compounds (e.g., resveratrol, curcumin, and maslinic acid) can reduce weight in obese mice and improve NAFLD, mainly by promoting the Sirt1/AMPK signaling pathway [17,25,30]. Our previous study found that phloretin can significantly inhibit the accumulation of oil droplets in differentiated 3T3-L1 adipocytes and significantly inhibit the expression of FAS and adipogenesis-related transcription factors. Phloretin can also improve lipolysis by promoting lipases and phosphorylation of AMPK in 3T3-L1 cells [22]. In recent years, some scholars have found that phloretin can prevent obesity and decrease body weight gain in HFD-fed mice; however, in that study, phloretin did not improve body weight or hepatic lipid accumulation in mice that were already obese [24]. In their treatment model, HFD-induced obese mice were fed for 6 weeks and administered 10 mg/kg phloretin by intraperitoneal injection for only 6 weeks. We speculated that prolonging the phloretin treatment period and increasing the dose might reduce the weight of obese mice. Therefore, we designed an experimental procedure to induce obesity for 16 weeks and treat the mice with 10 mg/kg or 20 mg/kg phloretin twice a week for 12 weeks. Treatment with 20 mg/kg phloretin for 12 weeks effectively reduced the weight of obese mice, as well as the weight and lipid accumulation of epididymal and inguinal adipose tissue.
Obesity is an important risk factor for cardiovascular disease and type 2 diabetes [31]. Long-term excessive intake of a refined diet will accelerate the accumulation of excessive energy in the body and lead to excessive weight [32]. Visceral fat tissue plays an important role in supporting and protecting the organs, but excessive accumulation of visceral fat will surround the organs and affect their function [31]. In mice fed a HFD, visceral and inguinal adipose tissue accumulate large amounts of TGs, which not only increases body weight, but also increases the adipose tissue weight. Previous studies have found that obese mice do not have reduced adipose tissue weight with intraperitoneal injection of 10 mg/kg phloretin for 6 weeks [24]. However, our experiments found that administration of 10 and 20 mg/kg phloretin by intraperitoneal injection for 12 weeks significantly reduces the weight of epididymal and inguinal adipose tissue in obese mice. Therefore, we conclude that phloretin can not only prevent weight and fat accumulation [24], but can also reduce the adipose tissue weight of obese mice to achieve weight loss. In obese mice, the liver also accumulates more TGs, causing liver steatosis and promoting the development of NAFLD.
[Figure caption, displaced by extraction: in vivo effects of phloretin (PT) on lipid metabolism in obese mice, including β-oxidation (c, d), lipolysis by Western blot (e, f), and the Sirt1/AMPK pathway (g, h), with fold expression relative to β-actin, plus effects of PT on TNF-α expression: serum TNF-α levels (i) and gene expression in liver (j), epididymal adipose tissue (k), and inguinal adipose tissue (l). Data are mean ± SEM of three independent experiments; n = 10; *P < 0.05, **P < 0.01 vs. HFD-induced obesity; #P < 0.05, ##P < 0.01 vs. the Normal group.]
Fatty liver is defined as excessive accumulation of TGs in liver cells, in which the fat content of the liver tissue exceeds 5%, or the fat vacuole content exceeds 10% [33]. The NAFLD score index includes blood biochemical values, fat vacuole number, and macrophage infiltration in liver tissue [28]. The NAFLD score is significantly higher for HFD-induced obese mice than for normal mice. Phloretin decreased the liver weight of obese mice, but did not reduce the ratio of liver weight to body weight compared to HFD-induced obese mice. The livers of obese mice accumulate excessive oil droplets; liver weight was increased by 1.17-fold in obese mice compared with normal mice, and by 1.13-fold compared with the 20 mg/kg phloretin group. Hence, the ratio of liver weight to body weight did not significantly decrease in phloretin-treated obese mice. Furthermore, liver cells can take up glucose and convert it into glycogen to store energy. However, the liver cells of obese individuals accumulate excessive lipids, which interferes with energy metabolism [11], and liver cells then convert glycogen back into glucose to provide energy. Therefore, fatty liver cells have less glycogen than normal liver cells. The current study demonstrated that phloretin was able to recover the glycogen accumulation in liver tissue that was reduced in HFD-induced obese mice. Hence, phloretin could regulate glycogen synthesis and maintain metabolic function in the liver. We found that the TG content and number of fat vacuoles in the livers of phloretin-treated obese mice were significantly reduced compared to untreated obese mice. Obese mice treated with phloretin also had significantly reduced serum GOT and GPT values; therefore, phloretin can restore liver function in obese mice. We think that our experimental results confirm that obese mice given phloretin for 12 weeks have significantly reduced NAFLD scores and improved symptoms of NAFLD.
[Figure caption, displaced by extraction: hepatic mRNA expression of HSL (e), CPT-1 (f), CPT-2 (g), PPAR-α (h), and Sirt-1 (i) according to real-time PCR, with fold-changes relative to β-actin (internal control); mean ± SEM of three independent experiments; n = 10; *P < 0.05, **P < 0.01 vs. HFD-induced obesity; #P < 0.05, ##P < 0.01 vs. the Normal group.]
Excessive accumulation of TGs in the liver will cause hepatic steatosis or NAFLD. The activation of transcription factors (including Srebp-1c and C/EBPβ) is important for initiating the expression of genes in the lipid synthesis pathway and activating FAS expression to promote the synthesis of fatty acid chains [34]. Srebp-1c is considered to be the most important transcription factor regulating lipid synthesis [35]. In the current study, we found that lipid accumulation in HepG2 cells and in the fatty livers of obese mice significantly increased Srebp-1c expression. Both phloretin-treated cells and phloretin-treated obese mice had significantly reduced Srebp-1c expression and suppressed FAS production. FAS is an important enzyme for regulating fatty acid chain synthesis and elongation [8]. A previous study found that HepG2 cells transfected with Srebp-1c siRNA and induced with fatty acid did not express Srebp-1c or accumulate excessive oil droplets [36]. Therefore, we thought that oleic acid-induced HepG2 cells and the hepatocytes of obese mice accumulate a large number of oil droplets and excessive TC and TGs in a manner closely related to the expression of lipid synthesis transcription factors and FAS. We examined the livers of obese mice, and phloretin significantly reduced the expression of C/EBPβ and Srebp-1c, thereby inhibiting the expression of FAS and the synthesis of fatty acid chains. Therefore, phloretin-treated obese mice have reduced levels of TG and TC in the liver and improved liver steatosis. We used Oil Red O staining to confirm that oleic acid-induced HepG2 hepatocytes contain more oil droplets and that phloretin can reduce the oil droplet distribution in HepG2 cells. Our cell experiments also found that phloretin has the ability to reduce the expression of Srebp-1c, C/EBPβ, and FAS in HepG2 cells induced by oleic acid. Therefore, we conclude that phloretin can block liver lipid synthesis by inhibiting transcription factors involved in lipogenesis and FAS expression in obese mice.
Liver or adipose tissue from obese mice has decreased AMPK activation [12,14]. Sirt-1 regulates AMPK expression and induces AMPK phosphorylation [15]. Resveratrol is considered to be a Sirt1 enhancer, and obese mice treated with resveratrol have improved NAFLD and liver steatosis via promotion of the Sirt1/AMPK pathway [37]. AMPK can be used as a sensor of energy regulation to maintain lipid and sugar metabolism in liver and adipose tissue [13]. Previous studies have confirmed that excessive lipid accumulation in the liver and adipose tissue can inhibit AMPK activity and inhibit AMPK substrate ACC phosphorylation, increasing fatty acid synthesis [38]. Thus, reduced AMPK activity would lead to excessive TG accumulation in the liver, accelerating steatosis and NAFLD. In this study, we found that phloretin can effectively regulate the expression of Sirt1 and phosphorylated AMPK in HepG2 cells induced by oleic acid, and stimulate the phosphorylation of ACC to block FAS expression. An assay of liver protein in obese mice provided the same results as phloretin-treated oleic acid-induced HepG2 cells. In addition, HepG2 cells treated with phloretin and AMPK inhibitors also had restored AMPK phosphorylation and inhibited FAS expression. Therefore, our experimental results confirm that phloretin can reduce the accumulation of lipids in the livers of obese mice by regulating the Sirt1/AMPK pathway.
The excessive lipid accumulation in epididymal and inguinal adipose tissue in an obese individual will interfere with organ functions and induce chronic inflammation and dysfunction [39]. Therefore, increasing the breakdown of excessive accumulated TGs will significantly improve liver steatosis and promote weight loss in obese individuals. TGs can be broken down by ATGL into free fatty acids and diglycerides, and activated HSL can break down diglycerides into free fatty acids and monoglycerides [40,41]. Previous studies found that phloretin treatment for 6 weeks does not decrease the weight of obese mice [24], but those studies did not analyze the molecular mechanisms of lipogenesis and lipolysis, so the expression of lipid metabolism pathways in phloretin-treated obese mice remained unclear. Our experiment, in contrast, was designed to administer phloretin for 12 weeks at an increased dose. Interestingly, our results showed that phloretin can significantly regulate the lipid synthesis and lipolysis pathways of the liver; therefore, phloretin can effectively improve body weight and hepatic lipid accumulation in obese mice. In this study, we also found that phloretin can increase ATGL and phosphorylated HSL expression in oleic acid-induced HepG2 cells. Interestingly, phloretin can also significantly restore ATGL expression when oleic acid-induced HepG2 cells are co-treated with phloretin and the AMPK inhibitor (compound C). Therefore, we confirmed that phloretin can increase lipolysis in fatty liver by regulating the phosphorylation of HSL, as well as ATGL and AMPK expression, achieving weight loss and improving lipid accumulation in the fatty livers of obese mice.
Studies have pointed out that the intestinal bacteria of obese people may stimulate inflammation in the liver and adipose tissue through the intestine and circulatory system, and induce more inflammatory macrophages to infiltrate the liver and adipose tissue [42,43]. Bacterial endotoxin and excess free fatty acids can also stimulate macrophages to release more TNF-α, inducing insulin resistance in hepatocytes and adipocytes [44]. Therefore, the excessive free fatty acids produced during weight loss need to be consumed through fatty acid β-oxidation, reducing the inflammation in the liver or adipose tissue caused by free fatty acids. In fatty acid β-oxidation, long-chain fatty acids need to be carried by carnitine to enter the mitochondria [7]. CPT-1 and CPT-2 are important enzymes for liver cells, as they bring free fatty acids from the cytoplasm into the mitochondria [45]. Our results demonstrate that phloretin can significantly increase the expression of CPT-1 and CPT-2 in the livers of obese mice, and that phloretin increases the production of CPT-1 and CPT-2 in oleic acid-induced HepG2 cells. Interestingly, animal and cell experiments show that phloretin can also enhance PPAR-α expression in the liver tissues of obese mice and in HepG2 cells to increase fatty acid metabolism via the β-oxidation pathway. We also found that phloretin can significantly reduce the levels of serum free fatty acids and TNF-α in obese mice. Furthermore, phloretin reduced the levels of TNF-α in the liver and epididymal adipose tissue, improving inflammation and insulin resistance.
Adipocytes in overweight and obese individuals secrete more leptin to affect the hypothalamus and suppress appetite, reducing the accumulation of excessive energy in the body [46]. Our experiments show that obese mice treated with phloretin have reduced serum leptin and increased serum adiponectin. However, the food intake of obese mice treated with phloretin was not significantly different from that of control obese mice. Interestingly, phloretin can regulate fasting blood glucose and insulin levels in obese mice. Previous studies have demonstrated that increasing the levels of adiponectin can effectively reduce insulin resistance in obesity [47]. Therefore, we think that phloretin may improve blood sugar levels by regulating the levels of leptin and adiponectin, improving insulin resistance in obese mice.
Previous researchers have concluded that phloretin can prevent obesity in mice [24]. In the current study, our experiments demonstrated that phloretin can reduce the body weight and adipose tissue weight of obese mice. Phloretin could also regulate lipid metabolism by activating the Sirt1/AMPK pathway, improving liver steatosis in obese mice. Therefore, we think that phloretin has potential as a natural anti-obesity agent for treating NAFLD. | 2023-01-25T15:18:00.546Z | 2020-09-29T00:00:00.000 | {
"year": 2020,
"sha1": "9ebaa356404044f39f5aa20cb6994c9c4d67acba",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s13578-020-00477-1",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "9ebaa356404044f39f5aa20cb6994c9c4d67acba",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": []
} |
197403031 | pes2o/s2orc | v3-fos-license | Laser-induced incandescence versus photo-acoustics: implications for qualitative soot size diagnostics
Laser-induced incandescence at high repetition rates can in principle be used to resolve the temporal evolution of soot processes. The intrusive character of this technique, however, requires due care regarding history effects associated with multiple exposures of individual soot particles to laser light. On the other hand, repetitive heating and cooling opens up an independent, acoustic detection channel. We illustrate a photo-acoustic soot volume fraction measurement, and show that the comparison to simultaneously recorded laser-induced incandescence provides qualitative information on soot growth. Experiments are performed on a propane-fueled, co-flow stabilized diffusion flame, and signals are collected at varying heights above the burner deck. Results show a clear correlation between the laser-induced incandescence and photo-acoustic signals; small deviations are interpreted as a qualitative indicator for the particle size.
Introduction
Laser-induced incandescence (LII) is one of the few optical techniques suited for in situ studies of soot in combustion. It is a relatively brute-force technique and, contrary to many other optical diagnostics, cannot be considered non-intrusive. The basic principle of LII is rapid heating of particles by means of a short, intense laser pulse, and recording the increased incandescence. Typically, only a small part of the full spectral range of the incandescence is recorded; this reduces the risk of interference by stray laser light or inadvertent laser-induced fluorescence, but also aggravates the dependence on particle temperature. By integrating the LII signal over time, a measure of the soot volume fraction is obtained, whereas the decay of the incandescence over time contains information on the primary particle size.
In principle, LII can be implemented using high-repetition-rate excitation and detection, to follow the soot volume fraction evolution over time in non-stationary combustion processes, such as occur in, e.g., internal combustion engines [1][2][3]. The intrusiveness of LII, however, now requires extra caution in the interpretation of the LII signal. When the contents of the probe volume are not refreshed between consecutive excitation laser pulses, the same soot volume will be probed multiple times. Since it is well known that soot particles are considerably modified by the excitation in an LII experiment [4,5], it is unlikely that a single soot particle will respond identically upon multiple exposures [6]. However, excitation at high repetition rates renders another, independent detection method feasible, viz. acoustic detection, on which only a limited number of reports have been published to date [7,8]. Sending high-frequency laser pulse trains through a sooting flame will produce a sound wave that is easily detectable by a microphone, or even by ear. This method of generating a photo-acoustic (PA) signal is illustrated in Fig. 1. In the absence of sublimation, achieved by selecting a sufficiently low fluence, conduction is the major heat loss channel for laser-heated soot [9], and the sound is associated with the repetitive heating and cooling of the gas surrounding the excited soot particles.
Both the increased luminescence intensity and the sound intensity will depend on the local soot volume fraction in the probe volume. This information, however, is presented in different forms. The acoustical signal strength is a measure of the energy lost by the heated soot particles to the surrounding gas, whereas the optical signal is due to light emission in a certain wavelength band. In this paper, we present an experimental comparison of laser-induced incandescence and photo-acoustic measurements on a sooting diffusion flame stabilized by a co-flow of air. We will argue that the acoustic signal is in fact a more direct representative of the soot volume fraction than the induced incandescence. A direct comparison of the acoustical and optical signals reveals small differences, which can be related to soot particle size. In the following, we will briefly describe the physics behind LII and PA, and the experimental setup. Subsequently, the laser-induced incandescence and photo-acoustic soot measurements on a propane diffusion flame at 5 kHz repetition rate are presented and discussed.
Theory
The idea of LII as a tool for soot diagnostics was introduced by Melton [10], who laid the foundation for a computational model that can be used to predict the LII signal. Over the following years, this model has been discussed and refined by various authors [11,12]. We will consider only a limited model here, following parts of the treatment by Michelsen et al. [12]. Figure 1 depicts the main heat transfer mechanisms during and after laser heating, being absorption and conduction. These mechanisms will determine the particle temperature evolution, and have distinctive dependencies on particle size [13]: a feature that is utilized when determining volume fractions and particle sizes. A simplified form of the differential equation Melton proposed is

$$\dot{U}_{\mathrm{int}} = \dot{Q}_{\mathrm{abs}} - \dot{Q}_{\mathrm{con}} - \dot{Q}_{\mathrm{sub}} - \dot{Q}_{\mathrm{rad}}, \qquad (1)$$

where $\dot{U}_{\mathrm{int}}$ is the rate of change in internal energy of a soot particle, $\dot{Q}_{\mathrm{abs}}$ is the rate of absorption of laser light, $\dot{Q}_{\mathrm{con}}$ is the conduction rate from particle to ambient gas, $\dot{Q}_{\mathrm{sub}}$ is the heat loss rate due to sublimation of carbon from the particle surface, and $\dot{Q}_{\mathrm{rad}}$ is the rate of energy loss by thermal radiation. All heat fluxes have been taken as positive, and their sign in (1) determines whether it is a loss or a gain term. The radiation term is often neglected in the energy balance, since its contribution to the cooling rate is several orders of magnitude smaller than the conduction losses.
Equation (1) merely states that the rate of change in internal energy is equal to the net amount of heat transfer to the soot particle, a fundamental consequence of the first law of thermodynamics. The rate of change of the internal energy is directly related to the rate of change of the particle temperature via

$$\dot{U}_{\mathrm{int}} = \frac{\pi}{6} D^3 \rho_s c_s \dot{T}, \qquad (2)$$

where $\rho_s$ and $c_s$ are the density and specific heat capacity of the soot particle, respectively, $D$ is the particle diameter and $\dot{T}$ is the temperature rise rate. Substitution of (1) into (2) provides an expression for the temperature rise rate:

$$\dot{T} = \frac{\dot{Q}_{\mathrm{abs}} - \dot{Q}_{\mathrm{con}} - \dot{Q}_{\mathrm{sub}}}{\tfrac{\pi}{6} D^3 \rho_s c_s}, \qquad (3)$$

where we have neglected cooling due to thermal radiation. The absorption of laser light, relevant only during an excitation laser pulse, is expressed by

$$\dot{Q}_{\mathrm{abs}} = \frac{\pi D^2}{4} \eta_{\mathrm{abs}} \dot{F}, \qquad (4)$$

where $\eta_{\mathrm{abs}}$ is the absorption efficiency of a soot particle and $\dot{F}$ is the temporal profile of the laser fluence. For particles in the Rayleigh regime ($\pi D / \lambda_{\mathrm{ex}} \ll 1$) [14], the absorption efficiency is computed using

$$\eta_{\mathrm{abs}} = \frac{4 \pi D\, E(m)}{\lambda_{\mathrm{ex}}}. \qquad (5)$$

Here, $E(m)$ is the absorption function, and $\lambda_{\mathrm{ex}}$ is the laser excitation wavelength. From (4) and (5), it follows that particles in the Rayleigh regime absorb laser energy at a rate proportional to their volume.
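A minimal numerical sketch of this low-fluence energy balance is given below, integrating Eq. (3) with the absorption term of Eqs. (4) and (5) and a linearized conduction loss. All property values are assumptions, and the effective conduction coefficient is merely tuned to reproduce the ~1 μs cool-down time quoted later in the text; it is an illustration, not the authors' model.

```python
# Sketch of the low-fluence LII energy balance, Eqs. (1)-(5): absorption
# during a Gaussian laser pulse plus a linearized conduction loss. KAPPA is a
# placeholder tuned to give a ~1 us cool-down; all properties are assumptions.
import numpy as np
from scipy.integrate import solve_ivp

D       = 30e-9        # particle diameter, m (assumed)
RHO_S   = 1850.0       # soot density, kg/m^3 (assumed)
C_S     = 1900.0       # soot specific heat, J/(kg K) (assumed)
E_M     = 0.4          # absorption function E(m) (assumed)
LAM_EX  = 1064e-9      # excitation wavelength, m
FLUENCE = 580.0        # 0.058 J/cm^2 in J/m^2 (laser at full power)
FWHM    = 8e-9         # laser pulse width, s
T_GAS   = 1800.0       # ambient flame temperature, K (assumed)
KAPPA   = 1.75e4       # effective conduction coefficient, W/(m^2 K) (tuned)

SIGMA = FWHM / (2.0 * np.sqrt(2.0 * np.log(2.0)))
ETA_ABS = 4.0 * np.pi * D * E_M / LAM_EX              # Eq. (5)
HEAT_CAP = (np.pi / 6.0) * D**3 * RHO_S * C_S         # J/K per particle

def fluence_rate(t):
    """Gaussian temporal laser profile; integrates to FLUENCE (W/m^2)."""
    return FLUENCE * np.exp(-0.5 * ((t - 20e-9) / SIGMA) ** 2) \
           / (SIGMA * np.sqrt(2.0 * np.pi))

def dT_dt(t, T):
    q_abs = 0.25 * np.pi * D**2 * ETA_ABS * fluence_rate(t)   # Eq. (4)
    q_con = np.pi * D**2 * KAPPA * (T[0] - T_GAS)             # linearized
    return [(q_abs - q_con) / HEAT_CAP]                       # Eq. (3)

sol = solve_ivp(dT_dt, (0.0, 2e-6), [T_GAS], max_step=1e-9)
print(f"peak particle temperature: {sol.y[0].max():.0f} K")
```

With these assumed values the particle peaks near 3000 K, i.e., safely below the ~4000 K sublimation regime mentioned below; the energy subsequently conducted to the gas is exactly the absorbed energy, which is what the PA signal measures.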
Although radiative heat losses are often neglected in the heat balance of a soot particle, it is the broadband thermal radiation that ultimately forms the LII signal. The spectral radiance of a black body is described by Planck's law:

$$B_\lambda(T) = \frac{2 h c^2}{\lambda^5} \, \frac{1}{\exp\!\left(\dfrac{h c}{\lambda k_B T}\right) - 1}, \qquad (6)$$

where $h$ is the Planck constant, $c$ is the speed of light in vacuum, $\lambda$ is the radiation wavelength and $k_B$ is the Boltzmann constant.
[Fig. 1 caption: Principle of photo-acoustic signal generation and collection. The particle size dependence is indicated for absorption and conduction processes.]

Detection of the LII signal with a typical intensified camera is essentially an integration of Planck's law over a certain spectral detection band, time interval, and particle size distribution. Yet, soot is not a perfect black body, and thus, the spectral emissivity has to be taken into account. In addition, the spectral sensitivity and solid angle of the detection system limit the collection of signal. The resulting detected LII signal is expressed by

$$S_{\mathrm{LII}} \propto \Omega \int_{\Delta t} \int_{\Delta \lambda} \int_0^{\infty} \eta(\lambda)\, \varepsilon(\lambda)\, \pi D^2\, B_\lambda\!\big(T(t)\big)\, P(D)\; \mathrm{d}D\, \mathrm{d}\lambda\, \mathrm{d}t, \qquad (7)$$

where $\Delta\lambda$ is the wavelength detection band (largely determined by the selection of filters), $\Delta t$ is the time interval of detection, $\eta(\lambda)$ is the quantum efficiency of the detection equipment, $\Omega$ is the solid angle of detection, and $P(D)$ is the soot particle size distribution. The spectral emissivity $\varepsilon(\lambda)$ is expressed similarly to the absorption efficiency as

$$\varepsilon(\lambda) = \frac{4 \pi D\, E(m)}{\lambda}. \qquad (8)$$

Substitution of (8) into (7), and assuming a narrow size distribution $P(D)$, in turn yields the proportionality

$$S_{\mathrm{LII}} \propto N_p D^3, \qquad (9)$$

where $N_p$ is the particle number density. The proportionality in (9) holds as long as the particle temperature is independent of its diameter. This is readily achieved using prompt, short detection intervals (i.e., ideally detect only during the laser pulse) and applying a sufficiently low laser fluence, such that the particle temperature remains well below 4000 K, and hence, cooling due to excessive sublimation is prevented [15].
To circumvent issues from shot-to-shot variations in laser intensity, Melton nonetheless applied a high fluence, allowing sublimation to reduce the influence of these variations. Although this eased the interpretation of LII images, it did introduce difficulties for inferring soot volume fraction information. As the particles are exposed to a lot of laser energy, sublimation becomes increasingly influential in the heat balance, inducing significant temperature variations between particles of different sizes. Notably, it is the increasing cooling rate of smaller particles that endows the LII signal with a bias towards larger particle sizes [13]. Melton analyzed this feature and arrived at an alternate proportionality expressed by

$$S_{\mathrm{LII}} \propto N_p D^{3+x}, \qquad (10)$$

where $x$ is dependent on the choice of detection wavelength, and was determined to be

$$x \approx \frac{154}{\lambda_c}. \qquad (11)$$

Here, $\lambda_c$ is the center wavelength (in nm) of the detection band. Hence, detection towards the red part of the spectrum can effectively reduce the bias in proportionality of the collected signal and soot volume fraction. Note that this deviation from an exact volume dependence arises from the detection step in LII. The photo-acoustic signal, on the other hand, is proportional to the pressure rise associated with the temperature rise of the ambient gas. Without any dependence on the wavelength of radiative emission, the acoustical signal exhibits a more direct relation with soot volume fraction. As long as conduction is the major heat loss channel for laser-heated particles, all heat input essentially ends up in the ambient gas. From (4) and (5), we can, therefore, expect the PA signal to be strictly proportional to the soot volume fraction.
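As a quick check of Eq. (11) against the blue detection used in this work: the 450 nm short-wave-pass filter implies a center wavelength somewhat below 450 nm (an assumption; taking roughly 440 nm),

$$x \approx \frac{154}{440} \approx 0.35,$$

consistent with the value of $x \approx 0.35$ quoted in the relative soot growth analysis later in this paper.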
Of course, there are some caveats to consider. First, microphones detect pressure variations, the amplitude of which depends on the cooling rate. Under atmospheric conditions, however, the characteristic cool-down time (the rate at which the particles dump the energy absorbed from the laser beam into the ambient gas) is always much smaller (in the order of 1 µs [16]) than the sound period. The sound wave frequency, of course, is equal to the laser pulse repetition rate (5 kHz in this case). The acoustic excitation can, therefore, be considered as instantaneous, independent of particle size. Second, the LII bias towards larger particles can be reduced using large detection wavelengths. This has several drawbacks. The experiment becomes more sensitive to background radiation by soot outside of the probe volume, intensified cameras are increasingly less sensitive, and the risk of fluorescence interference by mainly C₂ (Swan bands) and polycyclic aromatic hydrocarbons increases. The latter can in turn be largely prevented using an Nd:YAG laser at its fundamental wavelength, as was done in this work. Moreover, in this particular case, there is information in the bias when compared to the photo-acoustic signal, so in fact, we use blue filtering to increase the bias. Following the approach of Mueller and Martin [17], the relation between LII signal and particle temperature for the detection system used in this work has been assessed, and was found to exhibit an approximate $T^{14}$ dependence. It is thus expected that small variations in temperature between particles of different size classes result in considerable differences in radiation yield. Finally, it is not entirely clear what the effect of sublimation will be. Some of the absorbed energy is then lost on breaking bonds, and will contribute neither to the optical nor to the acoustical signal. Most of the experiments reported in this paper are performed at low fluence, so as to avoid sublimation.
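The steepness of this temperature dependence can be checked numerically from Eqs. (6) and (8): integrate $\varepsilon(\lambda) B_\lambda(T)$ over the blue detection band and evaluate the local power-law exponent. The sketch below assumes a flat quantum efficiency and a 400-450 nm band (both assumptions; the paper only states a 450 nm short-wave-pass filter), and shows exponents of the order of the quoted $T^{14}$ at the lower end of plausible LII temperatures, decreasing as the particles get hotter.

```python
# Rough numerical check of the steep temperature dependence of the detected
# LII signal: integrate eps(lam) * B_lam(T) over an assumed 400-450 nm band
# (flat quantum efficiency assumed) and evaluate n = d ln S / d ln T.
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23    # SI constants

def planck(lam, T):
    """Spectral radiance B_lam(T), Eq. (6)."""
    return 2.0 * H * C**2 / lam**5 / np.expm1(H * C / (lam * KB * T))

def band_signal(T, lam_lo=400e-9, lam_hi=450e-9, n_pts=500):
    """Detected signal ~ integral of eps(lam)*B_lam(T); eps ~ D/lam, Eq. (8)."""
    lam = np.linspace(lam_lo, lam_hi, n_pts)
    return float(np.sum(planck(lam, T) / lam) * (lam[1] - lam[0]))

for T0 in (2400.0, 2800.0, 3500.0):          # plausible LII temperatures, K
    dlnT = 1e-3
    n = (np.log(band_signal(T0 * (1.0 + dlnT)))
         - np.log(band_signal(T0))) / dlnT
    print(f"T = {T0:.0f} K: local exponent n = {n:.1f}")
```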
Experiment
A schematic overview of the experimental setup is shown in Fig. 2. The laser was synchronized to the camera using a Stanford DG535 delay generator. The delay was set to exactly the period of the camera frame rate, resulting in a laser repetition rate that is half the frame rate of the camera. Synchronization of camera and laser system was further fine-tuned for prompt detection using a fast photodiode (Thorlabs DET10A/M). The 5/10 kHz approach allows LII images and background images to be collected alternately, and hence enables individual correction for background luminosity for each laser shot. Each background luminosity image is simply subtracted from the preceding raw LII image, leaving only the laser-induced signal. A power meter was used to measure the average power output of the laser at regular intervals during a measurement series, using a flip mirror. Note that laser power information is thus not available for each individual shot.
Laser system
A diode-pumped Nd:YAG laser capable of repetition rates up to 10 kHz (Edgewave IS8II) is employed in the experiments. To reduce the influence of crosstalk from laser-induced fluorescence, the doubling crystal was removed from the laser, and it was used at its fundamental wavelength (1064 nm). A repetition rate of 5 kHz was selected for all measurements. With a full power capability of 70 W at the applied repetition rate, the maximum energy amounts to 14 mJ per pulse. The width and height of the rectangular beam equal 8 and 3 mm, respectively, yielding a laser fluence of 0.058 J/cm² at full power. Fluence was adjusted by controlling the diode current, which affects the laser pulse width. Temporal laser profiles were measured using a fast photodiode at various power settings, and the results are shown in Fig. 3, where it can be seen that the pulse width amounts to about 8 ns at full power. For simultaneous collection of LII and PA signals, no additional optics were used, and the laser was operated at full power. In some of the experiments treated in Sect. 4, however, the laser beam was focused to increase the fluence further. This will be stated where appropriate. The spatial beam profile is specified as top-hat along the width of the beam, while it shows a Gaussian intensity distribution in the perpendicular direction.
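As a quick consistency check using only the numbers stated above, the pulse energy and full-power fluence follow directly:

$$E_{\mathrm{pulse}} = \frac{P}{f} = \frac{70\ \mathrm{W}}{5\ \mathrm{kHz}} = 14\ \mathrm{mJ}, \qquad F = \frac{E_{\mathrm{pulse}}}{w\, h} = \frac{14\ \mathrm{mJ}}{0.8\ \mathrm{cm} \times 0.3\ \mathrm{cm}} \approx 0.058\ \mathrm{J/cm^2}.$$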
Detection equipment
An 8-bit CMOS camera (Lambert HiCam 5000) fitted with a 50 mm Nikon AF Nikkor f/1.4 objective lens was used for LII imaging. The camera has a built-in second-generation S20 intensifier that is capable of gating times down to 40 ns. Prompt detection (i.e., recording is started at the arrival of the laser pulse) with a 40 ns intensifier gate was applied in all measurements. Most of the background luminosity is rejected by a 450 nm short-wave-pass filter. The maximum frame size of the camera is 512 × 512 pixels, but this setting only allows for a maximum frame rate of 5 kHz. Reducing the height of the frame provides the possibility of increasing the frame rate. A frame size of 512 × 256 pixels was used, running at a frame rate of 10 kHz.
For collection of photo-acoustic signals, a Knowles EK23033 electret microphone was used, which was wired to a signal preamplifier and connected to a LeCroy WaveRunner 44MXi-S sampling oscilloscope. The microphone converts pressure differences to a voltage. The preamplifier in turn increases signal strength by a factor of 50. To be able to subtract background noise from the laser-induced signal, a reference measurement is taken without the laser. Only the intensity at 5 kHz, originating from the laser-induced sound wave, is used for further data analysis. Interpolation of the FFT spectrum is done to obtain the peak intensity at 5 kHz. The frequency response of the microphone is, therefore, not of importance, as the relative sound intensity is not biased by frequency-dependent differences in sensitivity. Each measurement point is run three times, and for each run, an average signal from 100 samples is taken from the scope. A similar procedure is applied to the LII measurements, where an ensemble average is taken over 100 consecutively collected images. LII and PA signals are recorded simultaneously at each location, so that the probe volume for both measurements is always exactly the same. Figure 4 shows the FFT spectrum of a typical acoustical measurement. A distinct peak is seen at 5 kHz, corresponding to the laser-induced sound. A second peak is seen around 3.5 kHz, which was identified as being caused by the co-flow of air through the burner. Additional background signal is seen over the whole spectrum, although weak in intensity. A small peak at 10 kHz is attributed to the second harmonic of the fundamental 5 kHz excitation frequency. Obviously, the pulsed excitation process will not result in a purely sinusoidal sound wave, which explains the presence of higher harmonics.
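A minimal sketch of this extraction step follows: Fourier transform a microphone trace and interpolate the single-sided amplitude spectrum at the 5 kHz excitation frequency. The sampling rate and the synthetic trace (a 5 kHz tone plus a 3.5 kHz co-flow hum and noise, mimicking the features of Fig. 4) are assumptions standing in for real scope data.

```python
# Sketch of the photo-acoustic signal extraction: FFT a microphone trace and
# read off the spectral amplitude at the 5 kHz laser repetition rate. The
# synthetic trace below stands in for real oscilloscope data.
import numpy as np

FS = 256_000          # sampling rate, Hz (assumed scope setting)
N = 2 ** 16           # number of samples per trace
F_LASER = 5_000.0     # laser repetition rate / sound frequency, Hz

t = np.arange(N) / FS
rng = np.random.default_rng(1)
# Hypothetical trace: laser-induced tone + co-flow hum near 3.5 kHz + noise.
trace = (0.8 * np.sin(2 * np.pi * F_LASER * t)
         + 0.3 * np.sin(2 * np.pi * 3_500.0 * t)
         + 0.2 * rng.standard_normal(N))

spec = np.abs(np.fft.rfft(trace)) / N * 2.0     # single-sided amplitude
freqs = np.fft.rfftfreq(N, d=1.0 / FS)

# Interpolate the spectrum at exactly 5 kHz, since the FFT bin grid
# generally does not coincide with the excitation frequency.
pa_amplitude = np.interp(F_LASER, freqs, spec)
print(f"PA amplitude at {F_LASER/1e3:.0f} kHz: {pa_amplitude:.3f} (arb. u.)")
```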
Diffusion flames
Initial exploratory experiments were performed on a simple candle flame, as shown in Fig. 5a. These experiments served to identify potential multiple exposure effects when performing high-speed measurements on sooting diffusion flames.
Thereafter, simultaneous LII and PA signals were collected on a more sophisticated diffusion flame burner (Fig. 5b), the flame of which is stabilized by a co-flow of air. A schematic overview of the burner, including geometrical data, is shown in Fig. 6. Propane at high pressure is fed to the burner via a pressure regulator, and the flame height is manually controlled at 65 mm above the burner deck by adjusting the admitted fuel flow. Pressurized air at eight bar is supplied to the co-flow channel. Flow homogenization is established by passing the air through several grids and porous materials before it exits at the burner deck. A glass cylinder shields the flame from ambient disturbances. When operated at full power, the laser used in this work is able to considerably modify the appearance of a flame, as evidenced in Fig. 5. These images were made using a consumer DSLR camera (Nikon D7000); details of exposure are shown in the images. Both recordings were taken at an approximate 45° angle to the laser beam, although at different orientations. In Fig. 5a, the laser beam passes the camera from the side, whereas in Fig. 5b, the beam travels towards the camera. Soot particles hit by a focused laser beam are essentially blown to pieces, giving rise to a strong decrease in flame luminosity, also in the region above the probe volume. Increased luminescence is observed only in illuminated regions of the flame where the laser intensity is low enough to only heat the soot, rather than destroy it. Interestingly, the strong temperature dependence of the soot luminosity amplifies small irregularities in the laser beam intensity profile, notably visible in Fig. 5b. Because these spatial beam non-uniformities are shown here to have a significant impact on the increased incandescence yield, they can easily dominate the structure observed in Fig. 5b. The combination of a sooting flame and a DSLR camera can thus quickly provide some qualitative insight into the spatial laser beam profile, without the need for an expensive beam profiler.
Multiple exposure effects
During a measurement, the laser probes a dynamic equilibrium in the flame. Fresh soot particles are continuously supplied at the bottom of the probe volume. Subsequently, they are hit several times by laser pulses while they traverse the probe volume, eventually leaving it at the top. Thus, the recorded, quasi-steady signal is made up of contributions by both fresh and aged soot particles. To obtain a representative measurement, we need to know the effect of aging.
The convection speed of burning gases in a candle flame was determined by means of soot vaporization velocimetry, essentially as introduced by Seitzmann et al. [18]. For this experiment, the position of the camera was changed, indicated by the shaded pictogram in Fig. 2, and the laser beam was strongly focused using a spherical lens. The idea is already illustrated in Fig. 5a. At high power, the laser burns away the soot that it hits, leaving a soot-free channel downstream. When the laser is suddenly switched off, this channel fills again with luminous soot particles by convection. This process can be followed by high-speed imaging, and the convection velocity can be estimated from the filling rate; see Fig. 7, where high-speed recordings of flame luminescence are depicted after the laser is shut down. Images are post-processed with a color map showing high and low intensities in red and blue, respectively. The height scale was determined by imaging a grid at the position of the flame. For the candle flame at ambient conditions, we find a speed of 0.68 ± 0.016 m/s. Although this convection speed is not directly applicable to the co-flow burner, we argue that, due to the pressurized gas flow of the burner, its velocity is presumably higher. As a consequence, multiple exposure effects will be less significant.
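A minimal sketch of this velocimetry analysis: track the height of the returning luminous front frame by frame and fit a straight line, whose slope is the convection speed. The frame rate matches the camera settings above, but the front positions below are hypothetical values chosen to illustrate the ~0.68 m/s result, not extracted image data.

```python
# Sketch of the soot vaporization velocimetry analysis: after laser shutdown,
# fit the height of the returning luminous front vs. time; slope = velocity.
import numpy as np

FRAME_RATE = 10_000.0                          # camera frame rate, Hz
t = np.arange(8) / FRAME_RATE                  # frame times, s

# Hypothetical front positions (m) recovered from thresholded images.
front = 0.002 + 0.68 * t + 1e-5 * np.random.default_rng(2).standard_normal(8)

speed, intercept = np.polyfit(t, front, 1)     # linear fit: slope = velocity
print(f"convection speed: {speed:.2f} m/s")
```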
Exploratory experiments on the candle flame were continued to assess the impact of multiple exposure effects. With a probe volume height of 3 mm and a laser repetition rate of 5 kHz, the convection speed found above implies that soot particles in the candle flame experience about 22 laser pulses while traversing the probe volume. Figure 8 shows how the quasi-equilibrium situation is reached. The laser is continuously pumped by diodes, but initially, the Q-switch is disabled by a trigger inhibit function on the delay generator. At t = 0, the Q-switch is suddenly activated, and the LII signal (integrated over the probe volume) is recorded for the subsequent individual pulses. This experiment is performed for various fluences at a constant full-power setting of the laser (the fluence is increased by focusing the laser beam). The results are compared in Fig. 8, normalized to the LII signal induced by the 10th laser shot. After only about 5-6 laser pulses, the system is seen to have reached a quasi-steady state. At low fluence, this state is reached following an initial rise of the LII signal, whereas at high fluence, it is the culmination of a decaying trend. Our interpretation of the trends in Fig. 8 is based on a balance between soot sublimation and ambient gas heating. At low fluence, the soot is heated relatively modestly, but otherwise left essentially unmodified by the laser. After each laser pulse it cools down, thereby heating the ambient gas. The next laser pulse therefore finds the soot at a slightly higher initial temperature, and is thus able to heat the soot to a slightly higher final temperature, resulting in increased luminescence. This continues until a new equilibrium has been reached [19]. At high laser fluence, the aforementioned heating also occurs, but now the soot particles are (at least partly) destroyed by the laser, which reduces the luminescence yield. For the subsequent measurements on the co-flow burner, a fluence of 0.058 J/cm² was selected. This corresponds to running the laser at full power with no additional optics affecting the laser beam. Although the fluence of individual pulses is thereby set at 0.058 J/cm², the effective fluence is expected to be higher, because local gas heating plays a role in the final temperature that the particles reach. Figure 9 shows the fluence dependence of the PA signal. At low fluences, a linear relationship between fluence and PA signal appears to exist, corroborated by the linear fit to the measurement data. This is the expected behavior, since the intensity of the photo-acoustic signal depends on the increase in sensible heat (i.e., internal energy) of the soot particles, assuming that all laser-induced heat is transferred from particle to gas. The increase in internal energy is in turn directly proportional to the laser fluence, as can be seen from (4) and (5).
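A few lines suffice to verify the pulse count quoted above and the low-fluence linearity of the PA signal; only the 3 mm probe height, 5 kHz repetition rate and 0.68 m/s speed come from the text, while the fluence-signal pairs are hypothetical.

```python
import numpy as np

# Number of laser pulses a soot particle experiences while crossing the
# probe volume: transit time (height / convection speed) times rep rate.
probe_height = 3e-3     # m
v_convection = 0.68     # m/s, from the velocimetry experiment above
rep_rate = 5e3          # Hz
n_pulses = probe_height / v_convection * rep_rate
print(f"pulses per traversal: {n_pulses:.0f}")      # ~22

# Low-fluence linearity check of the PA signal: fit S_PA = a*F + b and
# inspect the residuals (values below are illustrative placeholders).
fluence = np.array([0.010, 0.020, 0.030, 0.040, 0.058])  # J/cm^2
s_pa = np.array([0.18, 0.35, 0.52, 0.70, 1.00])          # normalized PA signal
a, b = np.polyfit(fluence, s_pa, 1)
residuals = s_pa - (a * fluence + b)
```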
Relative soot growth
Interestingly, at a fixed fluence, it turns out that the LII and PA signals, recorded simultaneously, do not behave the same in all regions of the flame. Figure 10 shows both signals as a function of height above the burner (HAB), each normalized to its (peak) value at HAB = 40 mm. An obvious correlation is observed between the two signals, but a systematic deviation remains. The origin of this deviation can at least partially be ascribed to the 40 ns gate time, which is relatively long compared to the 8 ns (FWHM) laser pulse. For this reason, conduction dominates the energy balance of the soot particles for 80% of the detection time, and temperature differences between particles of varying sizes arise. As previously discussed, the bias in the proportionality between the LII signal and the soot volume fraction is caused by temperature variations between particles of different sizes in the probe volume. The PA signal, on the other hand, does relate proportionally to the soot volume fraction, as it is a direct measure of the amount of absorbed energy. Thus, the difference in observed signals can be attributed to the unequal particle size dependence of the PA and LII signals. Indeed, soot particles are expected to vary in size as a function of HAB, and the systematic deviation between the PA and LII signals can be used to extract more information about this variation. To illustrate this, the data from Fig. 10 are plotted against each other (rather than as a function of HAB) in Fig. 11. From the analysis of Sect. 2, it follows that the normalized signals can be written as S_LII/S_LII,0 = (N/N_0)(D/D_0)^(3+x) and S_PA/S_PA,0 = (N/N_0)(D/D_0)^3, where N_0 and D_0 are the particle number density and particle size at the reference HAB, respectively. Of course, there will be a size distribution rather than a single size, but that is irrelevant for the argument. Thus, we can expect a relation between the normalized signals given by S_LII/S_LII,0 = (S_PA/S_PA,0) (D/D_0)^x. (14) According to Melton's expressions (10) and (11), the constant x has a value of approximately 0.35 for the detection range used in the current measurements. Yet, Melton derived this expression from high-fluence measurements, and it might, therefore, not be fully applicable here. In the absence of excessive sublimation, the value of x is expected to be lower than 0.35, for conduction is the only mechanism resulting in particle temperature differences during LII detection. In addition, as previously stated, we apply blue filtering to increase the bias as much as possible. Equation (14) implies that the LII signal will be lower than the PA signal when the particles are smaller than the reference particle size, and vice versa when they are larger. Figure 11 shows the normalized LII signal as a function of the normalized PA signal for the same measurement as presented in Fig. 10. Each dot corresponds to a specific HAB, and the black solid line indicates the trend that would be seen if the normalized LII and PA signals were equal. As both signals are normalized at 40 mm HAB, their values are necessarily equal for that measurement point. The results show that the LII signal is lower than the PA signal until the normalization point is reached; thereafter, the LII signal surpasses the PA signal. According to (14), this is the result of the primary particle size increasing along the measurement range in the flame. It must be noted that the last 10 mm at the tip of the flame have not been measured. These points were omitted due to flame instabilities in the tip of the flame when high-repetition-rate laser heating was applied, rendering it impossible to collect useful signals.
It is likely that the particles will eventually decrease in size in this part of the flame due to oxidation, but definitive conclusions cannot be drawn. Still, it is indeed possible to derive qualitative information on soot growth from differences between the LII and PA signals. Compared to LII, the PA detection channel is cheap and flexible, and less sensitive to the cooling process of laser-heated soot, at least for the low fluence applied here; it thereby provides a more direct measure of the soot volume fraction. A clear downside of the PA detection method is the lack of spatial resolution.
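Given relation (14), the LII-to-PA ratio can be inverted into a relative primary particle size; a minimal sketch follows, taking the nominal x = 0.35 (which, as argued above, is likely an upper bound at the low fluence used here).

```python
import numpy as np

def relative_diameter(s_lii_norm, s_pa_norm, x=0.35):
    """Relative primary particle size D/D0 from normalized LII and PA signals.

    Inverts S_LII_norm = S_PA_norm * (D/D0)**x, so points where the LII
    signal falls below (exceeds) the PA signal map to particles smaller
    (larger) than those at the reference HAB of 40 mm.
    """
    s_lii_norm = np.asarray(s_lii_norm, dtype=float)
    s_pa_norm = np.asarray(s_pa_norm, dtype=float)
    return (s_lii_norm / s_pa_norm) ** (1.0 / x)
```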
Conclusion
Photo-acoustic detection of laser-heated soot is shown to be a suitable technique for soot volume fraction measurement. As with LII, an independent calibration is required to obtain quantitative results. A photo-acoustic measurement is simpler and cheaper than LII (no camera needed), but the result is integrated over the whole illuminated volume, whereas planar LII provides additional spatial information. Photo-acoustic measurement, in combination with time-integrated LII, can be used to obtain qualitative information on the soot particle size. Further investigation is needed to explore the particle sizing capability of the described method. As previously mentioned, the deviation from an exact volume dependence of the incandescence signal originates solely from the detection procedure. More specifically, the selection of optical filters and the camera gating time are expected to influence the observed difference between the PA and LII signals as a function of HAB. The LII signal becomes less sensitive to particle size changes when the detection band is shifted to the red part of the spectrum. The camera gating time is also thought to affect the bias of the LII signal towards larger particles, as the share of conduction in the time-integrated signal increases when longer gating times are applied.
Rhetoric or Reform? Changing Health and Social Care in Wales
Throughout the United Kingdom, the National Health Service (NHS) struggles to meet demand and achieve performance targets. Services need to work with individuals and communities to reduce avoidable disease and dependence. All four UK nations have separately realised the need for change, but 20 years' experience suggests that vision and rhetoric are not enough. Success requires reformed systems and changed leadership behaviour to enable frontline staff to break the status quo. Top-down, target-driven behaviour must be replaced with a real focus on improvement, championing those who have the knowledge to deliver it.
Need for Change: A UK-Wide Problem
It is often forgotten that since 1999, in a post-devolution United Kingdom, there is no longer a single monolithic National Health Service (NHS) but four divergent models of health and social care. How much can the individual countries learn from each other's experience? "Change or collapse," the Nuffield Trust's July 2019 review of proposed healthcare reforms in Northern Ireland, concludes that a centralised approach, coupled with the political vacuum created by suspension of the Northern Ireland Assembly, has led to stagnation and a lack of real change. 1 The report points to learning for other UK nations, especially as England moves to greater centralisation. NHS Wales, too, struggles with pressures to perform and simultaneously reform, but, with one party (Labour) continuously in power for 20 years, slow progress cannot be attributed to a political vacuum. Reviewing healthcare in Wales, the Organisation for Economic Co-operation and Development (OECD) advocated for a 'stronger central guiding hand to play a prescriptive role.' 2 However, given Nuffield's comments about the highly centralised system in Northern Ireland, it is important that this message is interpreted carefully. Like Northern Ireland, Wales has very centralised governance arrangements for the NHS. The Chief Executive of NHS Wales is also the Welsh Government's Director General of the Health and Social Service Department. In England these two roles are separate, and in Scotland, while the roles are combined, there is a distinction between government and service. In both cases, this helps ensure clearer accountability. As the Northern Ireland experience demonstrates, a highly directive centralised system will continue to deliver the status quo. A Welsh Parliamentary Review 3 appeared to implicitly support this view in its call for urgent change with new, locally devised, integrated service models.
Welsh Health Minister Vaughan Gething rightly acknowledged the achievement of NHS staff last winter when the February 2019 performance data were published. 4 Their success in coping with record numbers of hospital attendances was a huge tribute to front-line services. As in Northern Ireland, the challenge is how to continue to deliver services while building different models for future care. However, service pressures continue at record levels. There were 3% more visits to Welsh A&E departments in 2019 than in the previous year, and £50 million of additional investment was allocated to meet waiting times targets (a 0.6% uplift of total Health and Social Care spend). 5 In 2016 the Welsh Government commissioned the Parliamentary Review of Health and Social Care. 3 The headline conclusion was that 'the current pattern of health and social care provision is not fit for the future.' The review presented a 'case for change' demanding 'a new approach to maintain and improve the quality of health.'
Déjà Vu All Over Again?
The Nuffield Trust Northern Ireland report pointed out that 'repeated independent reviews described the need to radically transform the system if it was to be sustainable and fit for the future (but) action and detailed plans failed to materialise, reinforcing a sense of scepticism.' In reading this, one is tempted to say 'for Northern Ireland read Wales.' Since 1999 there have been many independent reviews aimed at reforming public services in Wales. [6][7][8][9] The similarities between their recommendations suggest that progress has been inadequate.
Indeed, the Welsh Government's response to the Parliamentary Review, "A Healthier Wales," published in June 2018, 10 which sought to shift hospital-based care and treatment towards primary and community-based health and social care, well-being and prevention, bears a great likeness to the recommendations of the Wanless report on Wales in 2003. The proportion of care delivered in hospital versus primary care should reduce, not keep increasing. Yet since 2003, counter to the strategic intent, investment in hospitals has continued to rise while primary care budgets have, at best, stagnated, 11 in spite of investment in primary care clusters. Meanwhile, the service fails to live within its means, misses performance targets and has seen three major service quality failures. [12][13][14] NHS Wales has comprehensive and complex systems of accountability, managed directly by the Welsh Government. The delivery framework contains almost 100 measures supported by regular returns, close monitoring and an escalation system when performance slips. Yet performance continues to slip (as of the end of June 2019). In the last 3 years, five of the seven health boards have been placed in some form of escalation by Welsh Government. One has been in Special Measures since June 2015, the longest of any health body in the United Kingdom. At a service level, there were 6 individual services in Wales at level 4, the most serious escalation; 10 at level 3; 2 at level 2 and 1 at level 1. 15 If every system is perfectly designed to get the results it gets, 16 is it time to improve that system? The case for radical re-engineering is mounting quickly.
Lessons From Scotland
The 2017 Nuffield Trust report, "Learning from Scotland's NHS," lists key lessons for the NHS across the United Kingdom. 17 It suggests that Scotland has benefited from a consistent, strategic approach to delivering health and social care, 'with a clear, long-term uncontested agenda on quality' that both Labour and Scottish National Party (SNP) governments have signed up to. While few commentators would say that all things in Scotland are good, the Scottish health and social care system has 'benefited from a continuous focus on quality improvement … engaging the altruistic professional motivations of frontline staff to do better and building their skills to improve. Success is defined based on specific measurements of safety and effectiveness that make sense.' Another key area where Scotland has made progress is the drive towards integrating the health and social care systems. The Scottish Government has used legislation to create 31 statutory Integration Authorities across Scotland, bringing the NHS and local authorities together to deliver integrated health and social care and budgets. 18 Nuffield cites these two attributes, a consistent improvement focus and a legislative framework enabling cross-sectoral working, as transferable lessons.
Integration, Performance and Service Quality in Wales
In Wales, both the Wanless report (2003) and the Parliamentary review (2018) saw service integration as paramount. A plan for primary care published in 2009 ("Setting the direction" 19) has resulted in some valuable changes to the engagement of primary care, but overall progress has been very different to that in Scotland. In addition to the 9 health boards and trusts and 22 local authorities, Welsh Government legislation and policy have created a very complex and confusing series of partnerships, with 7 regional partnership boards, 21 public service boards, 4 regional education consortia and 4 economic partnerships, amongst others, all having different and overlapping geographic footprints. Sixty-four primary care clusters are intended to develop locally appropriate services. An inquiry reporting in 2017 20 found very limited evidence of reduced pressure on general practitioners or secondary care. Clusters had too little autonomy, while good practice examples relied on key enthusiastic individuals.
There is also considerable complexity at an all-Wales level with several organisations operating across Wales, all with different governance structures and varying lines of accountability. The latest response by Welsh Government, apparently based on the OECD report and parliamentary review, proposes a 'strong centre' streamlining current functions through a new Welsh Executive Board in the form of a special health authority. With no powers transferring to the new executive body, and its relationship with health boards unclear, the dual role of the NHS Wales Chief Executive/ Director General will remain. It is difficult to see how this will not add to, rather than reduce, the current confusion.
The patchy progress towards integration of health and social care in Wales is then reflected in some of the NHS's service performance challenges. Performance against waiting times metrics has often been poorer in Wales than in England. 21 The reasons are complex 22 but there is nonetheless a general picture of services straining at their limit while patients' needs are not fully met.
The most recent clinical failure, with unacceptable levels of clinical incidents in maternity services within the then Cwm Taf University Health Board, 14 has prompted many, including the health minister, to call for a culture change. 23 The Royal College of Obstetricians and Gynaecologists report found that staff were inadequately supported in their efforts to deliver safe services, and the Health Board's own leadership admitted publicly that "toxic" working practices had developed and that "fundamental cultural and behavioural issues have not changed." This occurred in an organisation which was not under any sanction for poor performance and which was in financial balance; indeed, it was widely held up as an exemplar, with regulators giving it a clean bill of health. While some would argue that a lack of marketisation in NHS Wales, and the very visible accountability it brings, encourages poor performance, like Edwards 22 we believe that these arguments are simplistic. Wales is not England. We believe the necessary change is about appropriate system leadership, not politics or more top-down centralisation. The most important challenge set for NHS Wales by the Parliamentary Review is to change its response to people's and communities' needs. To do that, the government will need to reduce its dependence on centralised performance measurement, targets and delivery: approaches that are known to cause dysfunctional consequences, including ossification and reduced morale. 24,25

The Need for System Leadership

It is the task of leadership to create a single productive learning culture. Macdonald et al argue that this is achieved through behaviour, systems, and symbols. 26 The performance framework in NHS Wales has become central to the existing culture. It establishes the behaviour, systems and symbols between government and boards, directors and services, managers and clinicians. The dominance of financial balance and targets trumps the business of delivering integrated health and social care. Direction comes from the top and delivery is expected to come from the bottom. The knowledge and inventiveness of local teams solving local problems is subordinated to central control. The conditions which Currie and Spyridonidis 27 describe for effective spread of innovation are not encouraged in this climate.
Health and social care is many times more complex and less predictable than many areas of industry. The need to respect and strengthen system knowledge among those who deliver services is at least as strong. The benefits of such a shift must be at least as great. The job of government must be to facilitate and enable healthcare to deliver excellence in a complex and demanding context.
Anyone for Rugby?
It can be difficult to change behaviour that has become counterproductive: in this case, the dominance of "performance" over service quality. In a metaphor that resonates in Wales, Mant 28 described this distortion of priorities in terms of rugby. The game was invented to ensure that all school pupils were kept occupied and out of mischief: heavier pupils up front and the fleet of foot at the back. A game dominated by forward play allows no time or space for the backs to show their skills. Macho behaviour in the scrum, often associated with groupthink, does not guarantee that a game is won; in the corporate world, it may lead to severe failures. For real success, the forwards should provide a platform for the backs to excel. Victory comes from cooperation across teams. Organisations which become dominated by "forward play" need to consciously re-engineer.
Currie and Spyridonidis 27 show us who the healthcare equivalents of the forwards and backs are. Financial and performance frameworks must provide the platform for effective clinical services and engaged clinicians, not dominate them. Without change, the service risks continuing to fail to live within its performance and finance constraints, while risking the consequences of disenfranchised and unsupported clinical staff.
Change the System to Change the Results
As the various independent reports have recommended, Wales, like Northern Ireland, needs to re-engineer its approach to managing health and social care. Unlike Northern Ireland, failure to deliver necessary strategic change cannot be put at the door of a political vacuum. Neither, given Nuffield's favourable comments about Scotland, 17 is failure to change inevitable. What is urgently required is a long-term strategic shift from a culture, behaviour, systems and symbols that preserve the status quo to the creation of a system that genuinely champions, enables and empowers those who deliver care, incentivising a culture of continuous improvement. Wales needs to reform its systems to achieve different results.
Ethical issues
Not applicable.
"year": 2020,
"sha1": "7d25c1988ef52de1969cc410161a1c1bada1d9de",
"oa_license": "CCBY",
"oa_url": "https://www.ijhpm.com/article_3790_32eba28946eff0197c07705a2e9e6220.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "626f28fab9bf7931fcbc6e4c283b07c32e18dd60",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"extfieldsofstudy": [
"Medicine",
"Political Science"
]
} |
Investigation of FOXM1 as a Potential New Target for Melanoma
Recent studies have shown that immunotherapies and molecular targeted therapies are effective for advanced melanoma. Non-antigen-specific immunotherapies such as immunocheckpoint blockades have been shown to be effective in the treatment of advanced melanoma; however, the response rates remain low. To improve their efficacy, they should be combined with antigen-specific immunotherapy. Elevated expression of the transcription factor Forkhead box M1 (FOXM1) has been reported in various human cancers, and it has been shown to have potential as a target for immunotherapy. The purpose of this study was to investigate FOXM1 expression in human melanoma samples and cell lines, to evaluate the relationship between FOXM1 expression and the clinical features of melanoma patients, and to investigate the association between FOXM1 and the MAPK and PI3K/AKT pathways in melanoma cell lines. We conducted quantitative reverse transcription PCR (qRT-PCR) and Western blotting analyses of melanoma cell lines, and investigated melanoma and nevus tissue samples by qRT-PCR and immunohistochemistry. We performed MEK siRNA and PI3K/AKT inhibitor studies, as well as FOXM1 siRNA studies, in melanoma cell lines. We found that FOXM1 was expressed in all of the melanoma cell lines and, by immunohistochemical staining, in 49% of primary melanomas, 67% of metastatic melanomas and 10% of nevi. Metastatic melanoma samples exhibited significantly higher mRNA levels of FOXM1 (p = 0.004). Primary melanomas thicker than 2 mm were also more likely to express FOXM1. Patients whose primary melanoma expressed FOXM1 had a significantly poorer overall survival compared to patients without FOXM1 expression (p = 0.024). Downregulation of FOXM1 by siRNA significantly inhibited the proliferation of melanoma cells, and blockade of the MAPK and PI3K/AKT pathways decreased FOXM1 expression in melanoma cell lines. In conclusion, FOXM1 is considered to be a new therapeutic target for melanoma.
Introduction
Malignant melanoma is one of the most aggressive skin cancers, and its incidence has been gradually increasing [1]. Malignant melanoma is responsible for most skin cancer-related deaths. Recent studies have shown that immunotherapies and molecular targeted therapies are effective for advanced melanoma. Ipilimumab (a fully human monoclonal antibody against cytotoxic T-lymphocyte antigen 4) has demonstrated consistent activity against advanced melanoma [2]. The survival curve began to plateau around the third year of treatment; the three-year survival rate was 20% [3]. Nivolumab (an anti-programmed death 1 antibody) was associated with objective responses in 30-40% of patients with metastatic melanoma [4]. The combination of nivolumab and ipilimumab resulted in an objective response rate that ranged from 50 to 60% [4]. On the other hand, vemurafenib (a BRAFV600E kinase inhibitor) has remarkable antitumor activity in patients with BRAFV600E-mutated melanoma. The median progression-free survival observed with the combination of BRAF and MEK inhibition is similar to that recently reported with combined nivolumab and ipilimumab (11.7 months in patients with a BRAF mutation) [4]. However, the effects of the inhibitor therapies are limited by the onset of drug resistance, which occurs within a period of several months. Thus, it can be said that the therapies for advanced melanoma have improved greatly; nevertheless, there is room for improvement in both types of therapy. We are of the opinion that antigen-specific immunotherapy should be used together with immunocheckpoint blockades. We have therefore focused our attention on Forkhead box M1 (FOXM1) as a target for anti-cancer immunotherapy in melanoma.
FOXM1 is a member of a family of transcription factors that regulate the expression of genes essential for cell proliferation and transformation and are implicated in tumorigenesis and tumor progression. FOXM1 is a key cell cycle regulator of both the transition from the G1 phase to the S phase and the progression to mitosis [5,6]. FOXM1 accumulates mainly in the cytoplasm at the late G1 and S phases, and nuclear translocation of the protein occurs before entry of the cells into the G2-M phase following cyclin E-CDK2 and Raf-MEK-ERK-mediated phosphorylation [7]. Furthermore, it has been shown that the loss of FOXM1 expression in cancer cell lines results in mitotic spindle defects, delays in mitosis and the induction of mitotic catastrophe [6]. Thus, FOXM1 is essential for cancer cell growth and survival. Additionally, tumor cells overexpressing FOXM1 are resistant to apoptosis and the premature senescence induced by oxidative stress, which has strong implications for resistance to chemotherapy [8].
The abnormal upregulation of FOXM1 is involved in the oncogenesis of various human cancers, including breast, lung, bile duct, prostate, brain and pancreatic cancers, in addition to basal cell carcinoma (BCC) and head and neck squamous cell carcinoma (SCC) [9][10][11][12][13][14][15]. Yokomine et al. reported that FOXM1 is overexpressed in various cancers based on a cDNA microarray analysis, and they revealed that FOXM1-derived peptides binding to HLA-A2 had the capacity to induce CTLs [9]. The authors also analyzed normal tissues, and showed that FOXM1 is expressed only in the testes, thymus, small intestine and colon in normal adult humans [9]. The ideal targets for anti-cancer immunotherapy should have two main characteristics: 1) the antigens should be overexpressed in cancer tissues, but not in normal tissues, because high expression in normal tissues will result in autoimmune responses; 2) the antigens should have an essential function in cancer cell growth and survival. In immunotherapy, antigen loss is an important problem that remains to be solved; however, even if cancer cells lose such an antigen to escape from the immune system, their growth and survival will then be inhibited. FOXM1 is therefore considered to be a suitable target for anti-cancer immunotherapy. A multiple-peptide cocktail vaccine (KOC1, FOXM1 and KIF20A) has already been tested in patients with refractory pediatric sarcoma in a phase I study. We hypothesize that the combination of immunocheckpoint blockades and antigen-specific immunotherapy utilizing FOXM1 will make immunocheckpoint blockades more effective. Additionally, a FOXM1 inhibitor, the thiazole antibiotic siomycin A, was reported to induce apoptosis in metastatic melanoma cell lines in a manner that correlated with the downregulation of FoxM1 [16]. Therefore, FOXM1 could be a target not only for immunotherapy, but also for molecular targeted therapy. Several studies have previously reported that FOXM1 is overexpressed in human melanoma cell lines [16][17][18]; however, to the best of our knowledge, there have been no previous studies in human melanoma tissue samples. In this study, we aimed to determine the expression of FOXM1 in human melanoma samples and to evaluate the relationship between the FOXM1 expression and the clinical features of melanoma patients.
The MAPK and PI3K/AKT pathways represent the most frequently mutated signaling pathways in human cancers, including malignant melanoma. It has been reported that up to 70% of melanomas carry the BRAFV600E mutation [19], and 70% have elevated AKT phosphorylation [20]. The high prevalence of dysregulation of these two pathways provides a rationale for the development of target-based therapeutics for treatment. FOXM1 has been shown to have cross-talk with the MAPK pathway in malignant melanoma [17,18]. The cross-talk of the AKT pathway with the FOXM1 pathway has been demonstrated [21,22]. AKT can control FOXM1 expression in osteosarcoma [21], and the downregulation of AKT by siRNA has been shown to inhibit FOXM1 expression, whereas the overexpression of AKT has been shown to increase FOXM1 expression in prostate cancer [15]; however, its role in melanoma cells has not been reported. To our knowledge, this is the first report that has investigated the association between the FOXM1 and PI3K/AKT pathways in melanoma cells.
Clinical assessment and patient characteristics
Tissue samples of melanomas and nevi were obtained during routine diagnostic procedures. A total of 20 benign nevi were obtained from 20 patients (10 males and 10 females), whose ages ranged from one to 86 years (mean: 44 years). Histologically, the samples included junctional, compound and intradermal variants. A total of 43 primary cutaneous melanomas were obtained from 43 patients (21 males and 22 females), whose ages ranged from 36 to 93 years (mean: 68 years). The primary cutaneous melanomas had Clark's levels ranging from I to V, and Breslow depths ranging from in situ to 53 mm. A total of 13 metastatic melanomas were obtained from nine patients (three males and six females), whose ages ranged from 52 to 84 years (mean: 71 years). Four metastatic melanomas were localized to the regional lymph nodes, while the others were obtained from skin metastases. We gathered patient information from the medical records to determine the clinical stage according to the American Joint Committee on Cancer (AJCC) Cancer Staging Manual, 7th edition staging system for melanoma of the skin [23].
Primary melanomas are classified into four clinical and pathological subtypes: lentigo maligna melanoma (LMM), superficial spreading melanoma (SSM), nodular melanoma (NM) and acral lentiginous melanoma (ALM). The institutional review board (the Faculty of Life Sciences, Kumamoto University, clinical research and medical technology ethics board) approved this study. Written informed consent was obtained from the patients before they were enrolled in this study, and also from the guardians on behalf of the minors/children enrolled. The study was performed in accordance with the Declaration of Helsinki.
Cell lines and culture conditions
Human melanoma cell lines were maintained in DMEM medium supplemented with 20% fetal bovine serum (FBS) in a 5% CO2 atmosphere at 37°C. The pancreatic cancer cell line, PANC1, was maintained in DMEM medium supplemented with 10% FBS in a 5% CO2 atmosphere at 37°C. Primary normal human epidermal melanocytes (NHEMs) were maintained in CSF-4HM-500D culture medium supplemented with human melanocyte growth supplements in a 5% CO2 atmosphere at 37°C. The human melanoma cell lines were kindly provided by the Cell Resource Center for Biomedical Research, Institute of Development, Aging and Cancer, Tohoku University (Sendai, Japan), Dr. Y. Kawakami (Keio University; Tokyo, Japan) and the ATCC (Manassas, VA, USA). The NHEMs were purchased from PromoCell (Heidelberg, Germany) and the ATCC (Manassas, VA, USA).
Reverse transcription-PCR
Total RNA was extracted from the tissues and cell lines using the RNeasy kit (Macherey-Nagel, Düren, Germany) according to the manufacturer's instructions, and purified RNA was then reverse transcribed into cDNA using the PrimeScript RT reagent Kit (TaKaRa, Shiga, Japan), as described in the manufacturer's protocol. Equal aliquots of cDNA were used for quantitative RT-PCR (qRT-PCR) employing SYBR® Premix Ex Taq™ II (TaKaRa, Shiga, Japan), according to the manufacturer's protocol. The qRT-PCR primers used for FOXM1 and MEK1 were purchased from Takara (Shiga, Japan). The GAPDH primers were purchased from QIAGEN (Tokyo, Japan). The cDNA samples obtained from the cell lines were used as templates for semi-qRT-PCR under the following cycling conditions: 40 cycles of denaturation for five seconds at 95°C, annealing for 10 seconds at 58°C and extension for 20 seconds at 72°C. The PCR products were separated via electrophoresis on 2% agarose gels, stained with ethidium bromide and visualized with the Gel Documentation System. The semi-qRT-PCR primer sequences used in the study were: 5'-CACCCCAGTGCCAACCGCTACTTG-3' and 5'-AAAGAGGAGCTATCCCCTCCTCAG-3', which can detect the three splicing variants: FOXM1a, FOXM1b and FOXM1c [9].
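The quantification formula is not spelled out in the paper, which states only that FOXM1 levels were normalized to GAPDH; assuming the common Livak 2^-ΔΔCt approach, the calculation reduces to a few lines (all Ct values below are hypothetical).

```python
def relative_expression(ct_target, ct_gapdh, ct_target_ref, ct_gapdh_ref):
    """Livak 2^-ddCt relative quantification (assumed method, not stated
    explicitly in the paper).

    ct_target, ct_gapdh         : threshold cycles in the sample of interest
    ct_target_ref, ct_gapdh_ref : threshold cycles in the reference sample
    """
    d_ct_sample = ct_target - ct_gapdh        # normalize to GAPDH
    d_ct_ref = ct_target_ref - ct_gapdh_ref
    return 2.0 ** -(d_ct_sample - d_ct_ref)   # fold change vs. reference

# Example with made-up Ct values: FOXM1 24.1 / GAPDH 18.0 in a melanoma
# line, against a reference sample at 27.5 / 18.2 -> ~9-fold higher.
print(relative_expression(24.1, 18.0, 27.5, 18.2))
```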
MicroRNA extraction and quantitative real-time polymerase chain reaction
Total RNA was extracted from cell lines using the RNeasy kit (Macherey-Nagel, Düren, Germany) according to the manufacturer's instructions. The cDNA was synthesized from total RNA using a Mir-X miRNA First Strand Synthesis kit (Takara Bio Inc.). For quantitative PCR, the primers for miR-370 were designed on the basis of the information provided in miRBase (http://www.mirbase.org): GCCTGCTGGGGTGGAACCTGGT. Primers and templates were mixed with SYBR Advantage qPCR Premix (Takara Bio Inc.), and cDNAs were amplified for 40 cycles of denaturation for 5 s at 95°C and annealing for 20 s at 60°C.
Immunohistochemical analysis
Immunohistochemical analyses were performed as described previously [9]. Sections of paraffin-embedded melanoma and nevus tissue samples were stained with a monoclonal mouse anti-FOXM1 antibody (clone 3A9; Abnova, Taipei, Taiwan), a monoclonal mouse anti-BRAFV600E antibody (clone VE1; Spring Bioscience, Pleasanton, CA) and a monoclonal rabbit anti-phospho-AKT (Ser473) antibody (Cell Signaling Technologies, Tokyo, Japan). An isotype monoclonal mouse antibody (clone MG2a-53; Abcam, Tokyo, Japan) and an isotype monoclonal rabbit antibody (Cell Signaling Technologies, Tokyo, Japan) were used as negative controls. The slides were mounted using aqueous medium and viewed under a microscope. The intensity of staining was classified as (-) (the same or weaker than the adjacent epidermis) or (+) (stronger than the adjacent epidermis). The samples were divided into two groups (positive or negative for FOXM1) according to the results of immunostaining, and the positive rate of FOXM1 was determined. Stained sections were scored according to the percentage of stained melanoma cells: 75-100%, 50-74%, 25-49%, 1-24% or negative. The samples were evaluated independently by two observers (A.M. and S.F.) in a blinded manner.
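A minimal sketch of how the intensity call and percentage banding described above could be encoded for downstream analysis (function and argument names are illustrative, not from the paper):

```python
def score_staining(intensity_vs_epidermis, pct_positive_cells):
    """Encode the two-tier FOXM1 call and the percentage banding.

    intensity_vs_epidermis : "stronger" -> (+), anything else -> (-)
    pct_positive_cells     : percentage of stained melanoma cells (0-100)
    """
    positive = intensity_vs_epidermis == "stronger"
    if pct_positive_cells >= 75:
        band = "75-100%"
    elif pct_positive_cells >= 50:
        band = "50-74%"
    elif pct_positive_cells >= 25:
        band = "25-49%"
    elif pct_positive_cells >= 1:
        band = "1-24%"
    else:
        band = "negative"
    return ("FOXM1(+)" if positive else "FOXM1(-)", band)

print(score_staining("stronger", 80))   # ('FOXM1(+)', '75-100%')
```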
Gene silencing using small interfering RNA (siRNA)

FOXM1-specific siRNA was purchased from SIGMA-ALDRICH (MO, USA), MEK1-specific siRNA was purchased from Cell Signaling Technologies (Tokyo, Japan) and scrambled control siRNA was purchased from Thermo Scientific Dharmacon (Kanagawa, Japan). Human malignant melanoma cell lines were transfected using the Lipofectamine RNAiMAX transfection reagent (Invitrogen Corporation, Carlsbad, CA).
Cell proliferation assays
We performed the BrdU cell proliferation assay to confirm whether the downregulation of FOXM1 by transfection of FOXM1 siRNA could inhibit melanoma cell proliferation. The CycLex Cellular BrdU ELISA Kit was purchased from CycLex (Nagano, Japan). Melanoma cell lines were transfected with FOXM1-specific siRNA or scrambled control siRNA using the Lipofectamine RNAiMAX transfection reagent. After a 72-h incubation, we performed the BrdU cell proliferation assay using the CycLex Cellular BrdU ELISA Kit. The experiments were conducted four times.
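A minimal sketch of reducing the four replicate BrdU readings to proliferation relative to the scrambled control (all optical-density values below are hypothetical placeholders, not the study data):

```python
import numpy as np

# Hypothetical background-corrected OD readings, four replicates each.
od_control = np.array([1.02, 0.97, 1.05, 0.99])   # scrambled control siRNA
od_foxm1 = np.array([0.55, 0.61, 0.58, 0.52])     # FOXM1-specific siRNA

rel_proliferation = od_foxm1.mean() / od_control.mean() * 100
print(f"proliferation relative to control: {rel_proliferation:.0f}%")
```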
MAPK and PI3K/AKT signaling pathway blockade in melanoma cells
To block the MAPK signaling pathway, melanoma cell lines were transfected with MEK1 siRNA (Cell Signaling Technology, Tokyo, Japan), using the Lipofectamine RNAiMAX transfection reagent. The scrambled control siRNA served as a control. To block the AKT signaling pathway, the PI3K inhibitor, LY294002 (Calbiochem, La Jolla, CA), and an AKT inhibitor (Calbiochem, La Jolla, CA) were added directly to the culture medium of the melanoma cells. DMSO served as a control.
Statistical analysis
The statistical analyses were carried out using the Mann-Whitney U-test, the Kruskal-Wallis test, a 2×2 contingency table analysis and the log-rank test. A p-value < 0.05 was considered to be statistically significant.
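For illustration, the non-survival tests above map onto standard SciPy routines. The mRNA values below are placeholders; the 2×2 table uses the positivity counts reported in the Results (29/55 melanomas and 2/20 nevi FOXM1-positive), and a Pearson chi-square without continuity correction (one common choice for a 2×2 table; the paper does not name the exact test) reproduces a p-value close to the reported 0.0009.

```python
import numpy as np
from scipy import stats

# Two-group comparison of FOXM1 mRNA levels (placeholder values).
primary = np.array([0.8, 1.1, 0.6, 1.4, 0.9])
metastatic = np.array([2.1, 3.4, 1.8, 2.9])
u_stat, p_mw = stats.mannwhitneyu(primary, metastatic, alternative="two-sided")

# Three-group comparison across nevus / primary / metastatic samples.
nevus = np.array([0.3, 0.5, 0.4])
h_stat, p_kw = stats.kruskal(nevus, primary, metastatic)

# 2x2 contingency table of FOXM1 positivity (counts from the Results).
table = np.array([[29, 26],    # melanomas: FOXM1(+), FOXM1(-)
                  [2, 18]])    # nevi:      FOXM1(+), FOXM1(-)
chi2, p_chi, dof, expected = stats.chi2_contingency(table, correction=False)
print(round(p_chi, 4))         # ~0.0009
```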
Expression levels of FOXM1 in the melanoma cell lines
We first performed a qRT-PCR analysis of the FOXM1 expression in 13 malignant melanoma cell lines, NHEMs and a human pancreatic cancer cell line, PANC1, as a positive control. PANC1 cells were previously reported to have a high expression of FOXM1 [13]. As shown in Fig 1A, all of the malignant melanoma cell lines and NHEMs exhibited expression levels of FOXM1 comparable to those of the PANC1 cells. Furthermore, we performed a semi-qRT-PCR analysis of FOXM1 to examine the expression of FOXM1 isoforms in the melanoma cell lines and NHEMs. The FOXM1 gene contains 10 exons, two of which (Va and VIIa) are alternatively expressed, giving rise to three differentially expressed mRNA isoforms: FOXM1a, FOXM1b and FOXM1c [9,14]. FOXM1a contains both alternative exons, FOXM1b contains neither of the alternative exons and FOXM1c contains only exon Va [25]. Due to the absence of VIIa, which is the inhibitory sequence, FOXM1b and FOXM1c exhibit transactivating activity, whereas the presence of VIIa renders FOXM1a transcriptionally inactive [25]. It has been reported that the expression of the FOXM1b isoform is increased in BCCs and SCCs [14]. As shown in Fig 1D, we found that FOXM1c is the primary FOXM1 isoform in human melanoma cell lines and NHEMs.
We also performed a Western blot analysis of the melanoma cell lines and NHEMs, and found that the protein levels of FOXM1 varied from one cell line to another. All of the NHEMs showed a high level of FOXM1 mRNA (Fig 1A); however, the protein level of FOXM1 was downregulated in one NHEM (NHEM③) (Fig 1B). It can thus be said that some melanocytes express high protein levels of FOXM1 and some express low levels in vitro. Additionally, we investigated the expression levels of microRNAs in the NHEMs and melanoma cell lines. MicroRNAs are a family of small noncoding RNAs that are important negative regulators of posttranscriptional gene expression, which eventually promote the degradation or translational suppression of target mRNAs. It has been reported that FOXM1 is a direct target of hsa-miR-370, and the level of miR-370 was shown to be decreased in gastric cancer samples compared to normal tissue [26]. We observed that the expression of miR-370 in NHEM③, which expressed low protein levels of FOXM1, was higher than that in the other NHEMs and in all of the melanoma cell lines (Fig 1C). Therefore, one possible mechanism underlying the discrepancy between the mRNA and protein levels of FOXM1 in NHEM③ could be the high level of miR-370 in these cells. For the other NHEMs and the melanoma cell lines, no such discrepancy was observed.
Detection of FOXM1 mRNA in the tissue samples
A quantitative RT-PCR analysis was performed on the primary melanoma, metastatic melanoma and nevus samples. As shown in Fig 2, the metastatic melanoma samples exhibited significantly higher expression levels of FOXM1 compared to the primary melanoma samples when the expression was normalized to GAPDH (p = 0.004). There was no statistically significant difference between nevi and metastatic melanoma (p = 0.14).
Immunohistochemical analysis of FOXM1 in tissue samples
An immunohistochemical analysis of the FOXM1 expression in 43 primary cutaneous melanoma, 12 metastatic melanoma and 20 melanocytic nevus tissue specimens was performed. Representative examples are shown in Fig 3A-3H, and negative controls using an isotype monoclonal antibody are presented in Fig 3D and 3E. The melanoma cells positive for FOXM1 displayed homogeneous cytoplasmic staining. The results of the immunohistochemical analysis are summarized in Table 1. The samples were divided into two groups (positive or negative for FOXM1) according to the immunostaining results, and the positive rates of FOXM1 detection were determined. FOXM1 was highly expressed in the melanoma samples, whereas only low-level expression was observed in the melanocytic nevus samples. Twenty-one of the 43 primary melanomas (49%) and eight of the 12 metastatic melanomas (67%) were positive for FOXM1; among the positive cases, 55% showed 75-100% staining of the melanoma cells, followed by 21% showing 1-24%, 17% showing 25-49% and 7% showing 50-74%. Two of the 20 melanocytic nevi (10%) were positive for FOXM1. The melanoma samples therefore exhibited significantly higher positivity for FOXM1 than the melanocytic nevus samples (p = 0.0009). There were no correlations between positivity for FOXM1 and the histological type of melanoma or the AJCC stage. Additionally, we performed an immunohistochemical analysis of BRAFV600E and phosphorylated AKT in sections of paraffin-embedded melanoma and nevus tissue samples. Previous studies have shown that FOXM1 is modulated by the Raf/MEK/MAPK pathway and that it may mediate the G2/M regulatory effect of Raf/MEK/MAPK signaling [24]. Cross-talk of FOXM1 with the PI3K/AKT pathway has also been demonstrated [21,22]. AKT can control FOXM1 expression in osteosarcoma [21], and the downregulation of AKT by siRNA has been shown to inhibit FOXM1 expression, whereas the overexpression of AKT increased FOXM1 expression in prostate cancer [15]. We therefore conducted an immunohistochemical analysis to confirm the correlation between the expression levels of FOXM1 and the BRAF mutation status and/or the pAKT status in melanoma tissue specimens. As shown in Table 2, we found that 55.2% of the cases positive for FOXM1 were also positive for BRAFV600E, and 27.6% of the cases positive for FOXM1 were also positive for phospho-AKT. These results showed that the expression levels of FOXM1 correlate with the BRAF mutation status and/or the status of AKT phosphorylation in patient samples, consistent with the concept established by the previous reports.
The correlation between the FOXM1 expression and tumor thickness
The primary malignant melanomas were divided into two groups based on whether the tumor thickness was 2.00 mm or less, or greater than 2.00 mm. Patients with stage 0 disease were excluded from this analysis. There was a significant difference in FOXM1 positivity between the two groups: among the FOXM1-positive cases, the number of patients with tumors ≥ 2.01 mm thick was significantly higher than the number with tumors ≤ 2.00 mm thick (p = 0.046) (Table 3).
The correlation between the FOXM1 expression and overall survival

Fig 4 shows the overall survival rate for the malignant melanoma patients estimated using the Kaplan-Meier method. We divided the patients into two groups based on the FOXM1 expression in the primary melanoma, as determined by immunohistochemical staining. The patients whose primary melanoma expressed FOXM1 had a significantly poorer overall survival compared to patients without FOXM1 expression (p = 0.024).
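A sketch of the survival comparison using the lifelines package (the paper does not name its software; the data frame layout and all values below are hypothetical):

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical layout: one row per patient.
df = pd.DataFrame({
    "time_months": [12, 30, 45, 60, 8, 22, 50, 61],
    "event":       [1, 1, 0, 0, 1, 1, 0, 0],    # 1 = death observed
    "foxm1_pos":   [True, True, True, True, False, False, False, False],
})

pos, neg = df[df["foxm1_pos"]], df[~df["foxm1_pos"]]

# Kaplan-Meier curves for the two immunostaining-defined groups.
kmf = KaplanMeierFitter()
kmf.fit(pos["time_months"], pos["event"], label="FOXM1 positive")
ax = kmf.plot_survival_function()
kmf.fit(neg["time_months"], neg["event"], label="FOXM1 negative")
kmf.plot_survival_function(ax=ax)

# Log-rank comparison of the two survival curves.
res = logrank_test(pos["time_months"], neg["time_months"],
                   event_observed_A=pos["event"],
                   event_observed_B=neg["event"])
print(res.p_value)
```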
The downregulation of the FOXM1 expression by siRNA inhibits cell growth
To determine whether FOXM1 could be an effective therapeutic target for malignant melanoma, the effects of FOXM1-specific siRNA on the proliferation of the human malignant melanoma cell lines MeWo and SK-MEL28 were examined. We performed a quantitative RT-PCR analysis and a Western blotting analysis to confirm the efficacy of the FOXM1-specific siRNA. We observed that both FOXM1 mRNA and protein levels were decreased when FOXM1-specific siRNA was transfected into the melanoma cells (Fig 5A and 5B), and we found that the downregulation of FOXM1 expression significantly inhibited melanoma cell proliferation (Fig 5C).
Blocking the MAPK pathway downregulates FOXM1 expression
Next, we conducted an MEK siRNA analysis to determine whether FOXM1 could be a suitable target for anti-cancer immunotherapy. It is known that the MAPK pathway controls the proliferation of cancer cells, including melanoma cells [18]. The MAPK pathway is among the most frequently mutated signaling pathways in melanoma, and the high prevalence of dysregulation of this pathway has provided a rationale for the development of target-based therapeutics [27]. Cross-talk between FOXM1 and the MAPK pathway has also been demonstrated in malignant melanoma. We therefore conducted the MEK siRNA analysis in four melanoma cell lines: MeWo and MM-LH cells, which are wild-type for both BRAF and NRAS, and SK-MEL28 and VM115 cells, which harbor the BRAFV600E mutation and are wild-type for NRAS, to determine whether MEK1 siRNA affects FOXM1 expression. We found that MEK1 siRNA downregulated the expression of FOXM1, together with p-MEK, in three of the melanoma cell lines (MeWo, MM-LH and VM115 cells) (Fig 6). These data suggest the possibility that FOXM1 is activated by the MAPK pathway in these melanoma cell lines. No relationship with BRAF status was observed.
The AKT activity is not affected by FOXM1 siRNA in melanoma cells
Recently, the combined use of BRAF and MEK inhibition has become a new standard for inhibiting the MAPK pathway in patients with advanced BRAF-mutant melanoma. However, the problem of acquired resistance has become a major stumbling block to obtaining long-term disease control [28]. We therefore considered the blockade of an alternate pathway to be worth investigating. First, to determine whether AKT was activated in melanoma cell lines, the expression of activated AKT was assessed using a Western blotting analysis with a phospho-specific anti-AKT antibody. In some melanoma cell lines, AKT was highly phosphorylated compared with that observed in the other cell lines (Fig 7A). We thus conducted LY294002 (a PI3K inhibitor) and AKT inhibitor studies in two melanoma cell lines in which AKT was highly phosphorylated: MeWo cells, which have a p53 mutation, and VM115 cells, which have functional p53. We found that LY294002 and the AKT inhibitor reduced the expression of FOXM1 (Fig 7B), thus suggesting that FOXM1 is regulated by the PI3K/AKT pathway in melanoma cells. However, the downregulation of FOXM1 by siRNA did not affect the expression levels of p-AKT in the melanoma cells (Fig 7C).
Discussion
FOXM1 has been reported to be overexpressed in various cancers and may be a suitable target for immunotherapy [9]. To our knowledge, this is the first study to demonstrate the FOXM1 expression in melanocytic lesions. In this study, we found that the metastatic melanoma samples exhibited significantly higher mRNA expression levels of FOXM1 (p = 0.004). In the immunohistochemical analyses, FOXM1 was overexpressed in 49% of the primary melanomas and 67% of the metastatic melanomas, whereas a markedly lower rate of expression was observed in the benign melanocytic nevi. The nevus tissues expressed some FOXM1 mRNA; however, relatively lower levels of FOXM1 protein expression were observed. This also reveals that FOXM1 expression is not absolutely specific to malignant melanoma, as it can also be detected in nevi. Previous studies have shown that HLA class I molecules are downregulated in nevus tissue samples [29]; nevi are therefore isolated from the adaptive immune response, and we consider the expression of FOXM1 in nevi to be acceptable. Cancer-testis antigens, like MAGE and NY-ESO, are considered good therapeutic targets for immunotherapy, and there are many clinical trials using cancer-testis antigens. It is known that cancer-testis antigens are expressed not only in cancer tissues but also in the testis. However, this does not become a major problem, because the testis is isolated from the adaptive immune system: it is devoid of HLA class I molecules and cannot present antigens to T cells [30]. As 49% of the primary malignant melanomas and 67% of the metastatic melanomas evaluated in this study expressed FOXM1, it would be inappropriate for all patients with melanoma to be treated with FOXM1-targeted immunotherapy. In fact, we have demonstrated that immunotherapy employing multiple tumor-associated antigens is more effective than that employing a single tumor-associated antigen [31]. Furthermore, a multiple-peptide cocktail vaccine (KOC1, FOXM1 and KIF20A) has already been tested in patients with refractory pediatric sarcoma in a phase I study. In future clinical trials, the use of multiple antigen-targeted immunotherapies, including FOXM1, should also be considered in melanoma. We believe that the combination of immunocheckpoint blockades and antigen-specific immunotherapy utilizing FOXM1 will therefore make immunocheckpoint blockades more effective. Moreover, it has been shown that higher expression of FOXM1 is associated with a poor prognosis in several cancers [32][33][34]. In melanoma patients, it has been shown that the five- and 10-year survival rates decrease significantly as the tumor thickness increases [23]. We found a significant correlation between the FOXM1 expression and tumor thickness (p = 0.046), and we also found a significant difference in the overall survival between the FOXM1-positive and FOXM1-negative patients (p = 0.024). It can thus be assumed that the FOXM1 expression is correlated, directly or indirectly, with the prognosis of melanoma patients. It is known that the MAPK and PI3K/AKT pathways represent the most frequently mutated signaling pathways in human cancers, including malignant melanoma. It has been reported that up to 70% of melanomas carry the BRAFV600E mutation [19], and 70% have elevated AKT phosphorylation [20]. FOXM1 has been shown to have cross-talk with the MAPK pathway in malignant melanoma [17,18].
Cross-talk between the AKT pathway and the FOXM1 pathway has been demonstrated in osteosarcoma and prostate cancer [21,22]. We conducted an MEK siRNA analysis in four melanoma cell lines, and found that MEK1 siRNA downregulated the expression of FOXM1, together with p-MEK, in three of them (Fig 6). We also showed that the inactivation of AKT by PI3K and AKT inhibitors decreased the expression levels of FOXM1 in melanoma cell lines. Furthermore, we found that the downregulation of FOXM1 by FOXM1-specific siRNA inhibited the proliferation of melanoma cells in vitro. In patient samples, we found that 55.2% of the cases that were positive for FOXM1 were also positive for BRAFV600E, and 27.6% of the cases that were positive for FOXM1 were also positive for phospho-AKT, meaning that the expression of FOXM1 was correlated with the BRAF mutation status and/or the status of AKT phosphorylation in the patient samples (Table 2).
These data suggest that FOXM1 may be an ideal target for anti-cancer immunotherapy.
Our results have some limitations because of the small number of samples examined in this study. Therefore, further studies with a larger sample size are needed in order to confirm whether FOXM1 is a new therapeutic target for melanoma.
"year": 2015,
"sha1": "1fc3cf5b70ecb2065b85ce82fe0397ba650cbd8f",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0144241&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1fc3cf5b70ecb2065b85ce82fe0397ba650cbd8f",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
The Challenge of Applying and Undertaking Research in Female Sport
In recent years there has been an exponential rise in the professionalism and success of female sports. Practitioners (e.g., sport science professionals) aim to apply evidence-informed approaches to optimise athlete performance and well-being. Evidence-informed practices should be derived from research literature. Given the lack of research on elite female athletes, this is challenging at present. This limits the ability to adopt an evidence-informed approach when working in female sports, and as such, we are likely failing to maximize the performance potential of female athletes. This article discusses the challenges of applying an evidence base derived from male athletes to female athletes. A conceptual framework is presented, which depicts the need to question the current (male) evidence base due to the differences of the “female athlete” and the “female sporting environment,” which pose a number of challenges for practitioners working in the field. Until a comparable applied sport science research evidence base is established in female athletes, evidence-informed approaches will remain a challenge for those working in female sport.
Introduction
In recent years there has been an exponential rise in the professionalism and profile of female sports [1]. Women's professional soccer, rugby, and netball leagues now exist in a number of countries. While still acknowledging the disparity in opportunities, salaries, and media exposure between elite male and female athletes [1], the increased professionalism has afforded female athletes the opportunity to train full time and also access professional sports coaching, sport science, and sports medicine support to help maximize performance potential.
Practitioners (e.g., sport science professionals) aim to apply evidence-informed approaches to accomplish the goal of optimal athlete performance and well-being. Evidence-informed practice is the application of research findings to the real world [2]. The challenges of applying research to practice in sport have been highlighted in the literature (i.e., the challenge of translating science into a specific context) [2]. This is even more challenging when working with female cohorts. While more female participants have been included in the research literature in recent years, these studies typically involve recreational athletes [3], and as such high-performance female athletes are typically underrepresented in the "sports performance" literature. This limits the ability to adopt and apply an evidence-informed approach when working with elite female athletes and as such may mean that we are failing to maximize the performance potential of this cohort.
The purpose of this article is therefore to highlight the challenges of applying evidence developed in a male cohort to a female cohort. Of note, this article will not discuss gender as it is outside the scope of this editorial.
The article also provides considerations to support the application of evidence into practice and to inform future translational research in female sport.
Considerations when Applying Sports Performance Research to Female Athletes
Scientific research aims to investigate the effects of independent variables (e.g., age, maturation status, a training intervention) on dependent variables (e.g., sprint performance). In sports science disciplines, sex should be controlled for, given the different biological attributes of male and female athletes [4]. However, sport science practices (e.g., training and recovery protocols, nutritional strategies, injury prevention interventions) in female sport are often underpinned by research conducted in male athletes, given the limited representation of female athletes in the sports performance literature. This underrepresentation is highlighted by a search for "injury" and "rugby" and "female" over the last 10 years, which retrieved 196 articles, whereas the same search with "male" replacing "female" retrieved 602 articles. A similar trend was also observed for "soccer match demands," with 13 and 102 articles retrieved for females and males, respectively (Scopus, 19 July 2019). These figures corroborate recent findings showing that only 35% of participants are female in studies published in the British Journal of Sports Medicine [5]. The application of evidence derived from male athletes to female athletes is a concern given the known biological differences between the sexes.
Developing an applied sports performance evidence base in female sport is also challenging, given the logistical and methodological context [6]. Fluctuations in hormone concentrations at different stages of the menstrual cycle may influence performance [8]. This is in addition to the different biomechanical profiles of female athletes in comparison to male athletes [6]. These factors may partially account for the lack of efficacy and effectiveness of interventions [7] when applying findings from sports performance research conducted in male athletes. For example, it is known that estrogen concentrations fluctuate throughout the menstrual cycle and estrogen has measurable effects on muscle function and tendon and ligament strength [8]. Estrogen and relaxin concentrations have been reported to peak during the luteal phase of the menstrual cycle, potentially increasing anterior cruciate ligament (ACL) injury risk [9]. Similarly, fluctuations in estrogen and progesterone concentrations during different stages of the menstrual cycle may affect temperature regulation, central nervous system fatigue, substrate metabolism, and overall exercise performance [7]. Therefore, female athletes may require different performance, nutritional, recovery, and injury prevention strategies in comparison to male athletes.
Contextual factors may also influence the effectiveness and application of sports science interventions in practice. Contextual factors include competition structure, finance allocated to tournaments, access to facilities, or access to expert staff, for example. Sports science and medical provision (e.g., strength and conditioning, physiotherapy, team doctor, nutrition) are often limited for female athletes in comparison to males and must be considered when trying to apply research to practice. For example, the success of a training or injury prevention intervention is not solely determined by the efficacy of the intervention, but it is also influenced by multiple interrelated contextual factors within the target group and in the community [10]. Specifically, return to play guidelines in sport (e.g., soccer, rugby) are the same for both sexes, yet female athletes have been reported to have higher concussion rates [11] and present different concussion symptoms [12]. When considering the return to play from injury, contextual challenges, such as access to appropriately qualified support staff (e.g., physiotherapist, sports science support), in addition to the previously identified biological differences, should be considered when supporting female athletes.
Developing and Applying Sports Science Evidence for Female Athletes
Current sports performance and player well-being strategies in female sport are often underpinned by evidence derived from male athletes or male talent development environments. While there are some good practices that can be derived from a male context, in some instances we may be failing to consider the requirements of the female athlete as highlighted above. When aiming to either develop applied sport science practices, adopt an evidence-informed approach, or undertake future research, the first step is to appraise and evaluate the current available evidence. Acknowledging that limited research studies have investigated female athlete cohorts in comparison to male athletes, this may lead to simply identifying the "best available evidence." For the practitioner, this may mean that the evidence is useful to support decision-making or indeed the findings may not be suitable to translate into practice, due to inherent differences (e.g., talent development systems in male youth soccer vs. female youth soccer).
In Fig. 1 (adapted from Hanson et al. [13]), we propose the considerations required when aiming to develop an evidence-based approach to practice in female sport. The figure highlights how important it is that the current evidence base is evaluated against (a) the female athlete and (b) the female sporting environment, in addition to the typical scientific scrutiny applied to published research literature. This can be used both to apply the current evidence to policy and practice and to conceive future research projects specific to the needs of female athletes, with direct translation into practice.
For example, there is a strong body of research evaluating the match demands of male rugby league [14], but at present limited research exists evaluating the match demands of female rugby league. Following the considerations presented in Fig. 1, by establishing that the female rugby league player differs from the male rugby league player (e.g., female vs. male 20-m sprint times of 3.66 ± 0.26 vs. 3.09 ± 0.12 s [15,16]), it is unlikely that match demands research from male rugby league players can be applied to female cohorts. Furthermore, rugby league is professional in England and Australia for elite males but amateur and semiprofessional for elite females; thus, when considering the female sporting environment and its context, this further corroborates the conclusion that match demands research from male cohorts has limited application to female cohorts.
Acknowledging that the effective translation of research findings is not solely determined by the efficacy of the intervention [13], there is a clear need to consider the "context" and "environment" of female sport, acknowledging that what occurs in the male game may not be most appropriate for the female environment. For example, despite the increased professionalism of female sport, factors such as insufficient training time and a lack of resources and equipment in comparison to male athletes may limit the ability of practitioners to apply such intervention-based evidence to practice. Specifically, within this context, professional medical staff or qualified sports science and strength and conditioning practitioners may not be present at all training sessions, given the limited funding at present in some female sports.
The application of established research models, considering the female athlete within context, is likely a useful starting point. Bishop [17] provides a framework for undertaking applied research, progressing from "descriptive research" (e.g., what do they do) to "implementation studies in real sporting settings" (e.g., can we improve current practice). Jones et al. [18] also proposed a research model, emphasizing the need to co-construct research questions with policy-makers and practitioners to increase the usefulness and adoption of the research findings in practice. Adopting such approaches to research as described by Bishop [17] and Jones et al. [18], with consideration for the needs of the female athlete and the context of female sport, will increase our understanding of the current context (i.e., physical qualities of players, match characteristics, recovery profiles, etc.), which, for a number of reasons discussed above, may be different to that in male athletes within the same sport. These studies are arguably more valuable at present than more advanced scientific studies (e.g., laboratory-based randomized crossover design studies). The challenge for the researcher is that this may be seen by journal editors and academic hierarchies as lacking "originality," given the potential methodological repetition of male research in a female cohort. While this may be true for the advancement of scientific methodologies, it is an essential first step in the research process to understand the context of female sport and the female athlete. Even within male cohorts, a call for research reproducibility has been made [19]; thus, studies conducted in male cohorts also need to be replicated in female cohorts.
Fig. 1. Considerations required when developing an evidence-based approach to practice in female sport.
Conclusion
In summary, all stakeholders need to be cognizant of sexual dimorphism, the disparity in the current sports science literature and the consequent challenges of adopting an evidence-informed approach to practice for female athletes. When applying and undertaking research in female sport, the first step is to appraise and evaluate the current available evidence, with consideration for both the female athlete and the female sporting environment. Considering the athlete and the environment allows the researcher and practitioner to consider potential differences to published literature in male cohorts. Due to the dearth of female-specific sport science literature, in most cases there is a clear need to start with descriptive research to understand the current level of performance within elite female sport. Once this is achieved, the next challenge will be exploring the influence of female physiology and the contextual factors which may limit the effectiveness of interventions with high efficacy. Only when this disparity in applied sport science research is addressed will the full potential of adopting an evidence-informed approach be possible in female sport.
"year": 2019,
"sha1": "5d2304eed1a51a47bccc3361726f84ceac58a9af",
"oa_license": "CCBY",
"oa_url": "https://sportsmedicine-open.springeropen.com/track/pdf/10.1186/s40798-019-0224-x",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e6675f33b382d43c7bf81bfd8c28ac5ae3fb199e",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
Crop harvest in Denmark and Central Europe contributes to the local load of airborne Alternaria spore concentrations in Copenhagen
This study examines the hypothesis that Danish agricultural areas are the main source of airborne Alternaria spores in Copenhagen, Denmark. We suggest that the contribution to the overall load is mainly local or regional, but with intermittent long distance transport (LDT) from more remote agricultural areas. This hypothesis is supported by investigating a 10 yr bi-hourly record of Alternaria spores in the air of Copenhagen. This record shows 232 clinically relevant episodes (daily average spore concentration above 100 spores m−3) with a distinct daily profile. The data analysis also revealed potential LDT episodes almost every year. A source map and analysis of atmospheric transport suggest that LDT always originates from the main agricultural areas in Central Europe. A dedicated emission study in cereal crops under harvest during 2011 also supports our hypothesis. The emission study showed that although the fields had been treated against fungal infections, harvesting still produced large amounts of airborne fungal spores. It is likely that such harvesting periods can cause clinically relevant levels of fungal spores in the atmosphere. Our findings suggest that crop harvest in Central Europe causes episodes of high airborne Alternaria spore concentrations in Copenhagen as well as in other urban areas in this region. It is likely that such episodes could be simulated using atmospheric transport models.
Introduction
The importance of understanding the spatial and temporal distribution of fungal spores has recently been highlighted by Lang-Yona et al. (2012), who presented seasonal variations of airborne fungal spore concentrations in 2009 at a site in Israel, based on quantitative real-time polymerase chain reaction (qPCR) analysis. Similarly, studies from the same group in Israel suggest that fungal spore concentrations peak during spring and autumn (Burshtein et al., 2011). The authors discuss whether these peaks could be related to spring blooms and autumn decomposition of the vegetation.
Fungal spore concentrations can also be obtained using volumetric spore traps of the Hirst design (Hirst, 1952). The advantage of the Hirst trap is that it provides a daily or bi-hourly record of fungal spore concentrations that may be used to construct actual calendars of bioaerosols (e.g. Ceter et al., 2012; Melgar et al., 2012; Skjøth and Sommer, 2010). The disadvantage of the Hirst trap is that the associated method for counting the spores in microscopes only provides observations of fungal spores at the genus level (e.g. Ceter et al., 2012; Skjøth and Sommer, 2010), whereas qPCR can quantify fungal spores at the species level. This disadvantage is outweighed by the long time series with high temporal resolution, often covering several years with bi-hourly records (Oliveira et al., 2009; Stepalska et al., 1999; Stepalska and Wolek, 2009) or even up to 10 yr or more (Aira et al., 2008; Hjelmroos, 1993; Skjøth and Sommer, 2010). Despite these advantages of data from the Hirst trap, data on fungal spores are rare in comparison to pollen data. This is even more pronounced when the number of studies on data from Hirst traps is compared to studies on atmospheric trace gases such as ozone. This lack of scientific attention has been recognized for a number of years, e.g. by an editorial in The Lancet (2008) and by the recommendations in Allergy by Cecchi et al. (2010); both suggested further studies in aerobiology. Studies on bio-aerosols such as airborne fungal spores are therefore highly needed.
The first long term study on fungal spores in the air of Copenhagen by Skjøth and Sommer (2010) showed that the genera Cladosporium and Alternaria both have their maximum concentrations during summer, but that Cladosporium has a much longer season. This suggests that the sources of these two important genera of fungal spores can be different.
Fungal spores that are among the most often observed genera are Aspergillus, Penicillium, Cladosporium and Alternaria (Lang-Yona et al., 2012; Larsen, 1981). The genus Alternaria includes numerous plant pathogens (Gravesen et al., 1994), and Alternaria spores are considered an important part of the total fungal spectrum in, e.g., potato crops (Escuredo et al., 2011; Iglesias et al., 2007). Alternaria spores of the species Alternaria alternata can also threaten human health (Damato and Spieksma, 1995) and cause allergic symptoms in sensitized individuals when the atmospheric concentrations are high (Gravesen, 1979). The observational methods in the Danish and the European monitoring programmes rely on visual identification of pollen and fungal spores. This means that this method cannot distinguish Alternaria alternata within the genus Alternaria. The threshold of 100 spores m−3 is based on methods that include both a slit sampler (for growing colonies and subsequent identification) and a Hirst trap for providing atmospheric concentrations (Gravesen, 1979). As such, the threshold of 100 spores m−3 includes both allergenic and non-allergenic fungal spores from the genus Alternaria. The sources of airborne Alternaria spores are considered to be mainly vegetation such as forest and agricultural land (e.g. Stepalska et al., 1999) during the drying and decomposition of aboveground plant tissues (Escuredo et al., 2011; Iglesias et al., 2007). The studies by Skjøth and Sommer (2010) showed that in Denmark the Alternaria season ends in the middle of September, which is about a month before leaf fall in the forests. Similar studies from Poland showed that the peak of the Alternaria spore season is found in July-August, while October-November has a relatively small load of Alternaria spores (Stepalska et al., 1999). This suggests that decomposition of tree leaves does not contribute to the overall Alternaria load in Northern and Central Europe. In the UK, agricultural areas near Cardiff and Derby have been suggested as potential sources of high Alternaria concentrations (Corden et al., 2003), and studies from northern Portugal and Poland have shown that rural areas have a higher load of Alternaria than nearby urban areas (Oliveira et al., 2009). Wheat harvesting has previously been shown to release large numbers of spores into the air (Friesen et al., 2001), exposing harvesters to large amounts of viable fungi (Hill et al., 1984). Vegetation in agricultural areas is therefore a likely main source of Alternaria spores in many parts of Europe. Studies from the USA have shown that spores from agricultural areas infected with soybean rust (P. meibomiae and P. pachyrhizi) have the potential to be transported more than 1000 km under favourable weather conditions (Isard et al., 2005, 2007). European studies on other aeroallergens have shown that the overall load in a region is typically due to local sources with intermittent long distance transport from remote regions (Skjøth et al., 2009; Smith et al., 2008). Alternaria spores vary in size (10-40 µm × 10-220 µm), have cylindric forms and fall speeds between 0.4 and 4 cm s−1 (McCartney et al., 1993). Alternaria alternata is among the smallest in this group of fungal spores, with an aerodynamic diameter of 19 µm. It is therefore likely that Alternaria spores, including Alternaria alternata, have a similar potential for atmospheric transport as other aeroallergens. This suggests that the temporal and spatial variation of Alternaria spores is mainly dependent on the proximity of local sources and only secondarily dependent on long distance transport (LDT) from areas with a high load of Alternaria.
In this study we hypothesize that Danish agricultural areas are the main source of airborne Alternaria spores in Denmark, and that the contribution to the overall load is mainly local but with intermittent LDT from non-Danish areas with both a high density of agricultural areas and a potentially high load of Alternaria due to harvest. We have adapted a protocol that has been used in several similar European studies on allergenic pollen since 2007 (Hernandez-Ceballos et al., 2011b; Sikoparija et al., 2009; Stach et al., 2007) and use the definitions of local, regional and long distance transport as given by Orlanski (1975). Here we investigate our hypothesis by analysing a 10 yr record of bi-hourly Alternaria spore observations from Copenhagen with respect to seasonality, overall daily pattern and potential source areas of LDT, combined with a dedicated field study on potential emission sources in agricultural areas.
Table 1. Seasonal spore index, day of season start, day of season end, day of maximum spore concentration and its value; sum of spores during the season, sum on low days, sum on high days, and number of days above the critical threshold of 100 spores m−3 as a daily mean value. The sums on low days and high days correspond to the total accumulated catch during the season (the days that cover 95% of the entire catch) and not the entire year.
Spore trap data and analysis of episodes
Measurements from Copenhagen obtained within the Danish pollen and spore program (Skjøth and Sommer, 2010; Sommer and Rasmussen, 2009) have been analysed for 2001-2010 with respect to Alternaria spores. In the monitoring programme, Alternaria spores are identified at the genus level and counted at 640× magnification on 12 transverse strips for every two hours. The total area of investigation corresponds to 9.75% of the total sample. This area and the flow rate of the fungal spore trap can be used to convert the spore count into bi-hourly concentrations or daily mean concentrations. The trap was located on the roof of the Danish Meteorological Institute (55°43′ N, 12°34′ E) in the centre of Copenhagen at a height of 15 m above sea level.
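As a concrete illustration of this conversion, the minimal sketch below assumes the standard 10 L min−1 sampling rate of a Hirst-type trap (the exact flow rate is not stated above) and uses hypothetical bi-hourly counts.

```python
# Minimal sketch of the count-to-concentration conversion described above.
# Assumptions: a Hirst-type trap samples 10 L of air per minute, and the
# bi-hourly counts used below are hypothetical example values.

FLOW_RATE_M3_PER_MIN = 0.010   # 10 L/min, standard for Hirst-type traps
FRACTION_COUNTED = 0.0975      # 12 transverse strips = 9.75 % of the sample
MINUTES_PER_INTERVAL = 120     # bi-hourly counting intervals

def spores_per_m3(count: int) -> float:
    """Convert a raw bi-hourly spore count to a concentration (spores m^-3)."""
    air_volume = FLOW_RATE_M3_PER_MIN * MINUTES_PER_INTERVAL  # 1.2 m^3
    return count / (FRACTION_COUNTED * air_volume)

# Hypothetical example: 24 spores counted in one 2 h interval
print(f"{spores_per_m3(24):.0f} spores per m^3")  # ~205 spores m^-3

# A daily mean concentration is the average of the 12 bi-hourly values
daily = [spores_per_m3(c) for c in (2, 1, 0, 3, 8, 14, 24, 30, 22, 12, 6, 3)]
print(f"daily mean: {sum(daily) / len(daily):.0f} spores per m^3")
```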
The near surroundings of the trap in Copenhagen are urban, while nearby areas within a distance of about 30 km are mainly agricultural in both southern Sweden and Denmark, as described by Skjøth et al. (2008). Summaries of the data for the 10 years are organised in two tables in a similar way as Kasprzyk et al. (2011), with respect to the annual spore index (the summation of daily mean values), the day with maximum concentration and the start of the season (Table 1). The annual spore index is dimensionless by convention in aerobiology (Mandrioli et al., 1998), although Buters et al. (2012) have recently argued that the unit of the associated equation must be grains m−3. The Alternaria seasons were defined using the 95% method (Goldberg et al., 1988), as this is the standard analytical method in the Danish pollen and spore program (Skjøth and Sommer, 2010; Sommer and Rasmussen, 2009). Each of the 10 yr is therefore investigated during the period when the accumulated number of fungal spores is between 2.5% and 97.5% of the total annual catch. The daily average concentration of 100 Alternaria spores m−3 has been reported as a clinical threshold for allergic symptoms (Gravesen, 1979; Ricci et al., 1995). Therefore, days with a daily average concentration above 100 spores m−3 were investigated for the mean diurnal variation (Fig. 1). Alternaria episodes (> 100 spores m−3) that showed diurnal patterns markedly different from the mean daily cycle (Table 2) were investigated further using back trajectory analysis, in a similar way as in related studies on Betula (Skjøth et al., 2009), Quercus (Hernandez-Ceballos et al., 2011a), Olea (Fernández-Rodríguez et al., 2012; Hernandez-Ceballos et al., 2011b) and Ambrosia artemisiifolia (Fernández-Llamazares et al., 2012; Kasprzyk et al., 2011; Sikoparija et al., 2009).
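The 95% season definition lends itself to a compact implementation; the sketch below is a minimal version, assuming the daily mean concentrations are available as a list indexed by day of year (the input series here is hypothetical).

```python
# Minimal sketch of the 95 % season definition (Goldberg et al., 1988):
# the season spans the period in which the accumulated catch lies between
# 2.5 % and 97.5 % of the annual total.

def season_bounds(daily_means):
    """Return (start_day, end_day) as 1-indexed days of year."""
    total = sum(daily_means)
    cumulative = 0.0
    start = end = None
    for day, value in enumerate(daily_means, start=1):
        cumulative += value
        if start is None and cumulative >= 0.025 * total:
            start = day          # first day at or above the 2.5 % mark
        if end is None and cumulative >= 0.975 * total:
            end = day            # first day at or above the 97.5 % mark
    return start, end

# Hypothetical season: a triangular pulse of daily means peaking mid-year
daily_means = [0.0] * 150 + list(range(1, 60)) + list(range(60, 0, -1)) + [0.0] * 96
print(season_bounds(daily_means))
```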
Field observations and emission estimates
Emission estimates of Alternaria spores during harvest were obtained at four locations around Tune, Roskilde, Denmark. Samples were obtained from wheat and barley fields between 18 August and 16 September 2011. The measurements were taken as grab samples (i.e. small but representative samples) from the exhaust airstream of the harvesting machine.
The grab samples were allowed to immediately sediment onto glass slides for later microscopic counting of spores.
Visual inspection of the fields revealed that none of the crops displayed signs of fungal infection. The barley fields had been treated against fungal infections by spraying with pyraclostrobin and tebuconazole on 15 June 2011. The wheat fields had been treated against fungal infections by spraying with propiconazole on 18 April 2011, with pyraclostrobin, epoxiconazole and boscalide on 25 May 2011, and again with epoxiconazole and boscalide on 16 June 2011. All applications were according to manufacturers' and agricultural advisors' recommendations, targeting the fungicide resistance spectra of local fungal pathogens. The fungicides used were neither targeted at, nor claimed active against, Alternaria spp., although it cannot be excluded that the fungicides used in the study fields initially had an inhibitory effect on Alternaria spp. Even though Alternaria triticina has been reported to cause fungal infections in wheat in India (Singh et al., 1998) and Argentina (Perelló and Sisterna, 2005), in Europe the economic effect of Alternaria spp. is considered insignificant in barley (Gannibal, 2008) and rare in wheat (Gultyaeva, 2008), making their chemical control unnecessary. During maturing and senescence of crop plants prior to harvest, the earlier applied systemic fungicides will cease to have an effect, which indeed is a regulatory condition for their use. Therefore, as Alternaria spp. are common in the environment, having a role in the decay of organic matter (Kirk et al., 2008), any Alternaria spores found in this study most likely reflect the normal course of fungal invasion of grain crops during early summer, occurring in most or all fields of Central and Northern Europe where moist conditions occur intermittently during the weeks prior to harvest.
The harvester was a CASE-IH Agriculture model 7120 Axialflow combine with a type 3050 cutter table (width 915 cm). During sampling of emission estimates, the harvester advanced at ca. 4 km h−1, with its air throughput set to ca. 950 (a unitless machine value, corresponding to ca. 570 m3 min−1). The straw shredder was running on two sampling dates; on the other two it was turned off.
Samples of emissions were obtained by manually directing the harvester's exhaust air stream through a 155 cm long piece of ventilation pipe (polished steel; inner diameter, 20 cm) and abruptly closing the input end with a padded nylon-covered lid. Immediately after closure, the pipe was positioned upright, with the lid on the upper end. The bottom end was kept open for 10 s to allow coarse particles to escape. Then the bottom was sealed using a standard ventilation pipe stopper (polished steel, with a rubber seal around the edge). The stopper had a glass slide centred on its flat inner side. The effective sedimentation distance, from the surface of the padded lid to the surface of the glass slide, was 155.5 cm. Samples of emissions were produced by allowing particles to sediment onto the glass slide from the air column inside the pipe for 9 min. After sedimentation, the slide was removed and archived for later microscopic analysis. In several cases, residual control samples were taken by continuing the sedimentation for another 9 min on a fresh glass slide while maintaining the pipe firmly in an upright position. Negative control samples included environmental air from the middle and from the upwind end of each field. Prior to each sampling, the inside of the pipe was cleaned with a stream of clean air.
The glass slides for emission estimates were identical to the slides used in the Danish pollen and spore program (Skjøth and Sommer, 2010; Sommer and Rasmussen, 2009). The surfaces of the slides were inspected for Alternaria spores using the spore counting method for Alternaria in the Danish pollen and spore program (Skjøth and Sommer, 2010). The microscopic counts were then converted to spores per volume of air in the exhaust of the harvesting machine by using the area investigated on the slide with the microscope (0.00006552 m2) and the sedimentation distance inside the pipe (1.55 m) as the third dimension. The spore concentrations were converted to estimates of Alternaria spores per ha of harvested field by using the width of the cutting table (9 m) and the driving speed (4 km h−1) of the harvesting machine, and by assuming that the grab sample was representative of the exhaust air stream (570 m3 min−1) of the harvesting machine. Microscopic counts and calculated fungal spore densities for each sample, along with estimated emission factors for the fields, are presented in Table A1.
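The conversion chain from a slide count to a per-hectare emission factor can be written out directly; the sketch below follows the quantities given above, with a hypothetical spore count of 500 for one grab sample.

```python
# Sketch of the conversion described above, from a slide count to an
# emission factor per hectare. Constants follow the text; the spore count
# (500) is a hypothetical example value.

SLIDE_AREA_M2 = 0.00006552       # microscopically investigated slide area
SEDIMENTATION_HEIGHT_M = 1.55    # settling distance used as third dimension
EXHAUST_M3_PER_MIN = 570.0       # exhaust air stream of the harvester
CUT_WIDTH_M = 9.0                # width of the cutting table
SPEED_M_PER_H = 4000.0           # driving speed, 4 km/h

def exhaust_concentration(spore_count: int) -> float:
    """Spores per m^3 of exhaust air: count over the sampled air column."""
    return spore_count / (SLIDE_AREA_M2 * SEDIMENTATION_HEIGHT_M)

def emission_per_ha(conc_m3: float) -> float:
    """Spores emitted per hectare of harvested field."""
    spores_per_hour = conc_m3 * EXHAUST_M3_PER_MIN * 60.0
    ha_per_hour = CUT_WIDTH_M * SPEED_M_PER_H / 10_000.0   # 3.6 ha/h
    return spores_per_hour / ha_per_hour

conc = exhaust_concentration(500)   # ~4.9e6 spores m^-3
print(f"{conc:.2e} spores m^-3 -> {emission_per_ha(conc):.2e} spores ha^-1")
```

With these values the example yields an emission factor of about 4.7 × 10^10 spores ha−1, i.e. of the same order as the field results reported in Sect. 3.2.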
Model calculations and potential source map
Agricultural areas under rotation and with mechanical harvesting methods have been identified in the CLC2000 dataset (European Commission, 2005) as consisting of the following three land cover types: non-irrigated arable land (code 211), permanently irrigated land (code 212) and pastures (code 231). The land cover data have been extracted for Central and Northern Europe (Fig. 2) and gridded to a tenth of the EMEP50 grid (http://www.emep.int/grid/griddescr.html) using a methodology similar to that of Skjøth et al. (2010) and Fernández-Rodríguez et al. (2012). The EMEP grid is commonly used for inventories in European air quality studies, including the use of the chemistry transport models EMEP (Fagerli and Aas, 2008; Simpson et al., 2012), EMEP4UK (Vieno et al., 2010) and DEHM (Brandt et al., 2012; Skjøth et al., 2011). This procedure allows easy comparison of the density of relevant emission areas throughout the region and analysis in relation to atmospheric transport (e.g. Fernández-Rodríguez et al., 2012).
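The gridding step can be illustrated with a simple block aggregation; the sketch below assumes the CLC classes have already been rasterised to a fine boolean mask (all arrays here are hypothetical stand-ins for the real land-cover data).

```python
# A minimal sketch of the gridding step: given a fine-resolution boolean
# raster that is True where a cell belongs to one of the source classes
# (211, 212 or 231), compute the fraction of source land cover per coarse
# grid cell by block averaging.

import numpy as np

def source_density(source_mask: np.ndarray, block: int) -> np.ndarray:
    """Fraction of source land cover in each (block x block) coarse cell."""
    rows, cols = source_mask.shape
    trimmed = source_mask[: rows - rows % block, : cols - cols % block]
    coarse = trimmed.reshape(rows // block, block, -1, block)
    return coarse.mean(axis=(1, 3))

# Hypothetical 100 m raster aggregated to ~5 km cells (block = 50)
mask = np.random.rand(1000, 1000) < 0.3
print(source_density(mask, 50).shape)   # (20, 20) grid of densities 0..1
```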
Back trajectories were computed using the Hybrid Single Particle Lagrangian Integrated Trajectory (HYSPLIT) model (Draxler et al., 2007). Trajectories were calculated using the GDAS (Global Data Analysis System) meteorological files maintained by ARL, with a temporal resolution of 3 h and a spatial resolution of 1 degree × 1 degree. Air mass trajectories were calculated at Copenhagen during the identified episodes with a receiving height of 500 m, which in general is representative for this kind of aerobiological study (e.g. Hernandez-Ceballos et al., 2011a). Air mass trajectories were plotted 48 h back in time with 2 h steps between each trajectory, corresponding to the time step of the fungal spore observations, following the method described by Stach et al. (2007) and later used by Skjøth et al. (2008, 2009), Smith et al. (2008), Sikoparija et al. (2009) and Hernandez-Ceballos et al. (2011a, b), using either the ACDEP model (Skjøth et al., 2002) in the matrix style (Skjøth et al., 2007) or the HYSPLIT model (Draxler et al., 2007). Measured precipitation from weather and climate stations has been used as an indicator of potential Alternaria spore release due to harvest in the potential source regions (Fig. 2 and Table A2), assuming that dry weather and dry fields are required for intense harvesting. Local meteorological observations of wind speeds during the episodes (Table A3) have been obtained from a meteorological mast located approx. 1500 m south of the pollen trap (Skjøth et al., 2008), which is operated by Aarhus University for use in the so-called integrated monitoring of air quality in Denmark (Hertel et al., 2007).
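The study relies on HYSPLIT for the actual trajectory calculations; purely as a conceptual illustration of what a back trajectory is, the sketch below integrates a parcel backwards through a static, hypothetical wind field with a simple Euler step. A real calculation requires time-varying, three-dimensional meteorology, as provided by the GDAS files.

```python
# Conceptual illustration only: a single back trajectory traced through a
# static gridded wind field (u, v in m/s) with a backward Euler step.
# All fields and values are hypothetical; this is not how HYSPLIT works
# internally, only a sketch of the underlying idea.

import numpy as np

def back_trajectory(lon0, lat0, u, v, lons, lats, hours=48, dt_h=1.0):
    """Trace an air parcel backwards through gridded u/v winds."""
    deg_per_m = 1.0 / 111_000.0                 # rough metres-to-degrees
    lon, lat = lon0, lat0
    path = [(lon, lat)]
    for _ in range(int(hours / dt_h)):
        i = np.abs(lats - lat).argmin()         # nearest-neighbour lookup
        j = np.abs(lons - lon).argmin()
        lon -= u[i, j] * 3600 * dt_h * deg_per_m / np.cos(np.radians(lat))
        lat -= v[i, j] * 3600 * dt_h * deg_per_m
        path.append((lon, lat))
    return path

lons, lats = np.arange(0, 30.0), np.arange(45, 65.0)
u = np.full((lats.size, lons.size), 5.0)        # uniform 5 m/s westerly
v = np.zeros_like(u)
print(back_trajectory(12.57, 55.72, u, v, lons, lats)[-1])  # origin 48 h ago
```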
Seasonal and daily variations of Alternaria in Copenhagen
The annual spore index of Alternaria in Copenhagen varied by more than a factor of two, from 4488 in 2003 to 10 781 in 2006 (Table 1). The mean day of season start was day number 182, while the mean day of maximum concentration during the season was number 218, with standard deviations of 7 and 13 days, respectively. The highest daily concentration was 1016 spores m−3 on 17 August 2001 (the second largest was 853 spores m−3 on the following day; not shown). The highest observed bi-hourly concentration was 2727 spores m−3 (not shown). A total of 232 days during the 10 yr period had high concentrations above the clinical threshold of 100 spores m−3. The contribution from individual years ranged from 17 high days in 2003 (of 70 days in total within the spore season) and 2008 (of 60 days in total) to 31 high days in 2009 (of 76 days in total). The contribution of the high days to the total seasonal load varied from about 55% in 2008 to more than 82% in 2009. The analysis of bi-hourly Alternaria spore concentrations on high days shows a typical daily pattern (Fig. 1), with high concentrations in the late afternoon reaching 378 spores m−3 and a minimum of 112 spores m−3 early in the morning. Sixteen of the 232 high days had a very different pattern compared to the typical daily pattern (Table 2). These non-typical daily patterns were identified by both visual inspection of each individual day and correlation analysis of the individual days with the mean pattern. Additionally, an inspection of the day before and the day after each period was carried out, following a similar methodology to that of other studies (Sikoparija et al., 2009; Skjøth et al., 2009). Except for the year 2002, each year had one or more of these 16 non-typical high days. Trajectory calculations show that all of the 16 non-typical high days had air masses arriving from main agricultural areas in southern Scania (Sweden), Denmark, Poland or Germany. The three most outstanding episodes with respect to both load and pattern are discussed in detail in Sect. 3.3 using trajectories and the source map (Figs. 3-5).
Alternaria emission sources in local agricultural fields
Analysis of the field data revealed between 10^6 and 10^7 Alternaria spores m−3 in the exhaust air of the harvesting combine (Table A1). These values were converted into emissions of between 1.2 × 10^10 and 6.7 × 10^10 Alternaria spores ha−1 during harvest. Residual control slides from a second sedimentation period gave 10-15% of the count on the initial slide. This suggests that the sedimentation efficiency in the pipe was 85-90% for Alternaria spores when using a 9 min sedimentation time. Negative controls always counted zero spores (data not shown). In addition, when the farmer let the machine run idle (the machine not advancing, i.e. no grain being harvested, but the motors running at normal speed), we were unable to find spores in the exhaust of the machine.
Trajectory calculations, potential source map and long distance transport
The inventory of potential sources of Alternaria spores in Central and Northern Europe (Fig. 2) reflects the density of managed agricultural areas that are under rotation. The inventory shows that potential sources of Alternaria spores are found in many parts of the studied area. The highest densities (70-100%) are found in western Denmark, central and northern Germany, southern Scania (Sweden) and central Poland. Much lower densities (0-20%) are found in most of southern Sweden, southern Germany, along the border between Germany and Poland, the southern parts of Poland and the Baltic countries.
In Sects. 3.3.1-3.3.3, the three most outstanding episodes with respect to both load and pattern are discussed in detail, using trajectories, the source map and the overall weather pattern, including measured accumulated precipitation in the potential source regions.
Episode 1: 30-31 August 2008
Daily average Alternaria spore concentrations observed on 30 and 31 August 2008 in Copenhagen were 161 and 313 spores m−3, respectively. Hourly Alternaria spore concentrations were low in the beginning of the period, increased quickly to above 700 spores m−3 late in the evening of the 30th and remained at a level of around 600 spores m−3 until midday on the 31st (Fig. 3a). From midday on the 31st until late in the evening, the concentrations gradually decreased to below 100 spores m−3. The weather in the study region had a high pressure ridge extending from Iceland (1029 hPa) over Scandinavia (~1020 hPa) to northern Germany and central Poland (1022-1023 hPa). This caused air masses to be pushed from the north towards Copenhagen. Around midday on the 30th, wind speeds decreased (as judged from the distance between trajectory points) and the air masses remained for a number of hours over Denmark and Scania, the southernmost province of Sweden, before arriving in Copenhagen. Similar situations with low wind speeds were also present on the 31st (Fig. 3b and c). This was also reflected by measured wind speeds down to about 1 m s−1 at nighttime during the episode (Table A3). A few mm of precipitation were recorded on the 29th in the entire region, and the most eastern parts of Denmark and Sweden also recorded precipitation on the 28th and 27th (Table A2). This suggests harvesting possibilities in Denmark and Sweden starting on the 30th and good harvesting possibilities on 31 August.
Episode 2: 10-11 August 2010
Daily average Alternaria spore concentrations observed on 10 and 11 August 2010 in Copenhagen were 153 and 130 spores m−3, respectively. Hourly Alternaria spore concentrations had a number of peaks (200-400 spores m−3) from the beginning of the period until midday on 11 August (Fig. 4a). After that the concentrations remained low. The weather in the study region was dominated by a high pressure system (1018-1019 hPa) over central Germany and Poland, which during most of the period pushed air masses from the south and southwest towards Copenhagen, passing over either Danish or German land areas, including water areas (Fig. 4b and c). Measured wind speeds ranged from about 2 m s−1 to more than 6 m s−1, with the highest wind speeds observed in the afternoon of the 11th (Table A3). Heavy precipitation was recorded over the eastern parts of Denmark and Scania on 9 August (Table A2). The eastern, western and southern parts of Denmark recorded almost no precipitation from 5 to 11 August 2010, suggesting generally good harvesting possibilities in most of Denmark and northern Germany until at least 10 and partly 11 August.

Episode 3: 15-16 August 2010

Hourly Alternaria spore concentrations were low in the beginning of the period and then had two distinct peaks exceeding 1000 and 1400 spores m−3 late in the evening of the 15th and during the early morning of the 16th (Fig. 5a). Hereafter, concentrations remained high at between 200 and 400 spores m−3 until late in the evening of the 16th, when concentrations dropped to near zero. The weather in the study region had a high pressure area (~1010-1021 hPa) covering most of Poland, the Baltic countries and Russia, and reaching down to the Balkan region. At the same time, minor low pressure centres (1006-1013 hPa) were located over southern Sweden and Germany. This caused air masses from the eastern and Baltic states to be pushed towards Denmark in the beginning of the period. These air masses arrived in Copenhagen from the northwest, passing over the northern parts of Scania. Around midday on the 15th, winds veered to the south so that the air masses originated from either Germany or Poland. These air masses crossed the Baltic Sea and arrived in Copenhagen either directly from the sea or by crossing the southern parts of Scania in Sweden. Measured wind speeds ranged from about 1 m s−1 to more than 6 m s−1, with the highest wind speeds occurring in the first half of the 16th (Table A3). Heavy precipitation was recorded over Denmark and Scania on the 13th and 15th (Table A2). Medium precipitation was recorded at Wielkopolski on 13, 14 and 15 August, while the remaining six Polish stations recorded limited or no precipitation. This suggests that during the episode there were limited harvesting possibilities in Denmark/Sweden and good harvesting possibilities in Poland.
Discussion
The measured airborne concentrations of Alternaria spores in Copenhagen show that the majority of the 232 high days have a strong diurnal pattern with a maximum in the late afternoon and a minimum during the night or early morning (Fig. 1). If the main source of Alternaria spores were remote sources, then this daily pattern would have been either non-existent or could have peaked at any time of the day, as large plumes of LDT of aeroallergens can arrive in Copenhagen at any time of the day or night (Mahura et al., 2007; Skjøth et al., 2007, 2008). The high days are outnumbered by low days (Table 1), but every year the total load during the season has been dominated by the high days, which contributed up to 82% of the entire Alternaria load during the season (Table 1). Here we have investigated all 232 high days individually. Only 16 (Table 2) of the 232 high days have a diurnal pattern that deviates from the overall pattern (Fig. 1). The potential source map (Fig. 2) shows that Denmark is dominated by land cover types that can be a strong source of Alternaria spores. Additionally, the emission study from a typical land cover with agricultural production in rotation shows that harvesting releases a large amount of Alternaria spores, even when the fields have been treated with fungicides. Finally, the small fraction of high days that show a diurnal pattern differing from the overall pattern have been analysed with respect to air mass transport (using HYSPLIT and the reanalysis meteorological dataset). In all cases, the air masses came from more remote areas that are also dominated by land cover types containing potential sources of Alternaria spores. Such episodes were identified almost every year during the study period (Table 2). Additionally, it was shown that even if a region such as eastern Denmark and southern Sweden had received very large amounts of rain, making harvest very difficult, more remote regions, e.g. Poland, could have contributed with large amounts of Alternaria spores (Fig. 5). Overall, these results suggest that the daily load of Alternaria spores is dominated by local or regional sources with intermittent LDT from more remote sources in, e.g., Germany and Poland, and that these LDT episodes can happen almost every year.
Recently, a number of source-receptor studies on aeroallergens have been carried out by combining measured concentrations from Hirst traps with trajectory calculations. These studies have identified both local sources and intermittent long distance transport from regions with high source densities. Common to all of these source-receptor studies is that they focus on pollen such as Betula (Mahura et al., 2007; Skjøth et al., 2007, 2008; Veriankaite et al., 2010), Quercus (Hernandez-Ceballos et al., 2011a), Olea (Fernández-Rodríguez et al., 2012; Hernandez-Ceballos et al., 2011b) and Ambrosia artemisiifolia (Fernández-Llamazares et al., 2012; Kasprzyk et al., 2011; Sikoparija et al., 2009). Our study suggests that the methodology used for allergenic pollen can be extended to fungal spores and that agricultural fields are a potential source of elevated Alternaria spore concentrations.
The Alternaria spore emissions we measured during harvest may be considered average for state-of-the-art agricultural practice. Fungal disease was not observed in either the wheat or barley fields that were harvested. Despite this, fungal spore concentrations were recorded during harvest using grab samples from the exhaust to estimate emissions from the harvesting machine. The grab samples were taken on four different days from different fields containing barley and wheat. These emission estimates from measurements in the Danish fields gave surprisingly uniform results, about 5 × 10^10 spores ha−1 during harvest. With this emission factor, a simple Eulerian box model calculation suggests that if 2% of the entire surface area in a region is harvested, the threshold of 100 spores m−3 would be exceeded in the harvested region. Here it is assumed that spores are kept airborne the entire day, that spores are well mixed in the atmosphere up to 1000 m, that all fungal spores are kept in the local region and that all harvested areas have an emission factor similar to the Danish areas that were treated with fungicides. Thus, our emission factor could be related to the geographical location or could be a function of agricultural management, such as the use of machinery or the application of fungicides. How spores will actually be distributed in the atmosphere when the emission factor varies between fields, and when atmospheric transport and deposition are taken into account, is not known. However, it is known that the concentrations in rural areas typically are larger than in nearby urban areas (Kasprzyk and Worek, 2006). This suggests the importance of atmospheric transport from nearby agricultural sources. Furthermore, the low variation in the samples suggests that the experimental method is robust for estimating emissions, despite the crude sampling technique, and that it only requires a small number of samples. Surprisingly, fungal spore emissions were not increased after the rain period between 21 August and 10 September 2011, which one could have expected, as wet periods are known as periods of fungal growth. Instead, harvesting over moist soil after the rain period appeared to result in lower spore emissions. Observations similar to ours have been made for other agricultural fungal spores (Friesen et al., 2001) and for pollutants such as ammonia, where the local emission depends strongly on both climate and agricultural production methods (Gyldenkaerne et al., 2005; Sommer et al., 2003, 2006). Our estimated emission of spores during harvest was of the same order of magnitude as in the study by Friesen et al. (2001), but used a much simpler approach. The simplicity of our method may therefore make it applicable to different areas.
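The box-model estimate quoted above can be reproduced in a few lines; the sketch below uses the stated assumptions (emission factor of 5 × 10^10 spores ha−1, a 1000 m well-mixed layer, and spores retained in the region for the day).

```python
# Sketch of the simple Eulerian box calculation referred to above: a region
# in which a fraction of the surface is harvested in one day, with spores
# assumed to stay airborne and to mix uniformly up to 1000 m.

EMISSION_PER_HA = 5e10        # spores ha^-1, the emission factor found above
MIXING_HEIGHT_M = 1000.0      # assumed well-mixed layer depth
HARVESTED_FRACTION = 0.02     # 2 % of the regional surface harvested

spores_per_m2 = EMISSION_PER_HA / 10_000.0 * HARVESTED_FRACTION
concentration = spores_per_m2 / MIXING_HEIGHT_M
print(f"{concentration:.0f} spores m^-3")   # 100 spores m^-3, the threshold
```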
The map produced in this study (Fig. 2) suggests that most of Denmark, southern Scania (Sweden) and the northern and central parts of Poland and Germany, in contrast to southern Poland, have a high density of potential Alternaria source areas. Previously, the southern parts of Poland have been identified as having a lower Alternaria load compared to central Poland, especially Poznan (Stepalska et al., 1999). This lower load stood in contrast to the longer vegetation period in southern Poland compared to central Poland (Stepalska et al., 1999). The study by Stepalska et al. (1999) therefore indicates that there must be a higher density of sources in central Poland, thus supporting our map. If agricultural areas are the main source of Alternaria spores in Denmark, then it is likely that the fungal spore concentration is higher in western Denmark than in eastern Denmark (Copenhagen). In our map of potential source areas, western Denmark has a considerably higher proportion of potential source areas than eastern Denmark. Similar relationships have previously been suggested by Corden et al. (2003), who found a high Alternaria spore load in Derby, with high agricultural production, and a low annual spore load at the coastal site in Cardiff, which had very limited cereal production. Also in southern Poland, results from the operational trap in Rzeszow were compared with results from a rural trap 10 km away (Kasprzyk and Worek, 2006). In the year 2001, the load was about the same at the two Polish sites, while the Alternaria load in the rural area was more than double the urban load in 2002, with a markedly different seasonal pattern. For Denmark, such relationships remain to be investigated using more than one spore trap. Such studies would provide valuable information about western Denmark and can also be used to test the hypothesis in this paper as well as to investigate the robustness of the proposed source map. It is not known if LDT is a contributing factor at locations other than Copenhagen. This again calls for dedicated source-receptor studies on fungal spores from sites other than Copenhagen. Such studies can also be considered an answer to both the editorial in The Lancet (2008) and the recommendations given by Cecchi et al. (2010).
In a Spanish potato crop treated with fungicides, Alternaria spores were recorded during the entire growth season, but with a peak in Alternaria concentration during leaf senescence (Escuredo et al., 2011; Iglesias et al., 2007). This suggests that although fields are treated against fungal disease (and visual inspection does not reveal Alternaria attack), spores are still present in the field and are released in varying quantities throughout the entire season. The Spanish studies also showed that the fungal spore load in the region of Ourense is higher in the field area (Escuredo et al., 2011) compared to the load observed in the nearby city area (Aira et al., 2008). This again stresses the importance of atmospheric transport on the local scale. More importantly, other studies by Hill et al. (1984) and Friesen et al. (2001) have observed very large spore releases during harvesting. Similarly, Mitakakis et al. (2001) found periods of Alternaria bursts during mowing and harvesting of grass in Australia. Alternaria spp. have been named among the agents of fungal diseases in wheat (alternaria leaf blight, A. triticina; black head molds, Alternaria spp.) and barley (kernel blight, Alternaria spp.). Our study focused on a specific harvest situation, and samples were taken from a few wheat and barley fields that had been treated with fungicides. Sampling in infested crops, or crops that have not been treated with fungicides, or harvesting using different methods might therefore yield significantly different emission factors. If emission factors can be obtained from both growing crops and crop harvest (e.g. by using the simple methodology that we employed), this will provide the much needed emission factors that can be used by atmospheric modellers in order to increase understanding of how Alternaria spores are released and distributed in the atmosphere.
The main characteristics of the spore season show that the annual spore index from Copenhagen varies by more than a factor of two, from less than 5000 to more than 10 000. The annual index of Alternaria spores and the number of days with clinically relevant levels are correlated (r2 = 0.67; columns 2 and 10 in Table 1), which is not surprising as these numbers are highly dependent: the main load of Alternaria spores in Copenhagen is due to episodes of peak days (Table 1). The total uncertainty due to the counting method (Carinanos et al., 2000; Sikoparija et al., 2011; Sterling et al., 1999), i.e. the person who counts, basic measurement errors and meteorology, can easily reach 50% in total on individual days (Pedersen and Moseholm, 1993). This error requires large differences in the counts in order to be statistically significant. However, much larger samples, such as the sample that produced Fig. 1 (with n = 232), only require a 10% difference in order to be statistically significant when the formula by Pedersen and Moseholm (1993) is used. This small difference must, however, be compared to the rather large variation in the actual dataset, as given by the error bars in Fig. 1. The error bars in Fig. 1 and the 10% difference required for statistical significance therefore suggest that the observed daily variation is both statistically significant and physically relevant. The annual level in Copenhagen is generally higher than the levels observed on the Spanish part of the Iberian Peninsula (Aira et al., 2008; Rodriguez-Rajo et al., 2005) or in Sweden (Hjelmroos, 1993), where the annual spore index of Alternaria has been reported to be in the range of 1000-4000. Levels similar to those in this study were also reported for Warszawa in a Polish study by Stepalska et al. (1999). The same Polish study also showed that the annual loads in Poznan can have an index that exceeds 30 000. Such large loads were also observed by Angulo-Romero et al. (1999) and Maya-Manzano et al. (2012) in Merida in 1997, where the spore index for Alternaria was about 20 000, 25 000 and 50 000, respectively. The studies from Spain also pointed out that large geographic variations exist in the spore count, as another site, Caceres, only had a spore index of about 2000. Here it is worth noting that Caceres is a region in Spain with limited crop production, while Merida is a Spanish region with large areas of irrigated crops such as maize, tomato and fruit trees (Maya-Manzano et al., 2012). The studies by Maya-Manzano et al. (2012) as well as Stepalska et al.
(1999) show that the load in the same biogeographical region can differ by more than a factor of ten between years and between sites. Such large variations can be difficult to explain and map using volumetric spore traps alone. The seasonal variation found in this study, with only one single peak, has been found in most European studies, such as in Poland, Sweden, England and Spain. Bimodal peaks are only found in the Mediterranean region (Angulo-Romero et al., 1999; Cosentino et al., 1995; De Linares et al., 2010; Giner et al., 2001; Lang-Yona et al., 2012; Maya-Manzano et al., 2012). All these studies on annual loads and seasonal variations, as well as our study, highlight the interrelated connection between the overall weather in the geographical regions and the abundance of local sources. Studies that focus on various aspects of source mapping (e.g. observations of load and comparisons between sites, source-receptor studies using trajectories, or actual mapping of potential sources) are therefore all highly needed for fungal spores. Such studies provide much needed insight into an area that, according to an editorial in The Lancet (2008), has to some degree been forgotten and therefore needs much more scientific attention.
A number of studies have shown daily patterns of Alternaria spore concentrations similar to those in our study. Stepalska and Wolek (2009) showed that in Krakow the distribution of peak concentrations had a similar pattern to the peak concentrations in this study. In Krakow, peak concentrations are most often observed in the late afternoon, about a factor of three more often than during the night and early morning (Stepalska and Wolek, 2009). Similar observations, with a peak in the late afternoon and a minimum in the night or early morning, were made in the north of Portugal (Oliveira et al., 2009; Rodriguez-Rajo et al., 2005), the north of Spain (Aira et al., 2008), the south of Spain (Angulo-Romero et al., 1999; Giner et al., 2001) and Italy (Ricci et al., 1995). This suggests that at all these sites, including Denmark, the overall load of Alternaria is due to local or regional sources.
In Denmark, Cladosporium and Alternaria dominate the atmospheric fungal spore flora with 68.9% and 9.4% of the total fungal spore catch, respectively (Larsen, 1981). The high season for fungal spores is June until October, but external meteorological factors affect the fluctuation from day to day and year to year (Larsen, 1981). Our studies suggest that the Alternaria concentrations can be explained by combining source maps with atmospheric transport. Such information can be relevant for both agriculture and patients who are sensitized to fungal spores. The number of patients who are sensitive to fungal spores is usually much lower than for pollen (Damato and Spieksma, 1995). A recent study estimates that 2.4% of the entire population is sensitized to fungal spores (Elholm et al., 2010). However, the same data (Elholm et al., 2010) showed that asthmatics had a significantly higher prevalence of fungal spore sensitisation than non-asthmatics: 6.6% vs. 2.0% in the two groups, respectively. For sensitisation to Alternaria, the corresponding figures were 6.1% vs. 1.7%, and it has also been observed that the clinical reaction to fungal spores is often stronger than the reaction towards pollen (Sigsgaard, personal communication). This calls for additional efforts in research, diagnosis and treatment of allergy, which would be a direct response to the editorial in The Lancet (2008) as well as to the overall recommendations on aerobiological research given by Cecchi et al. (2010), such as the collection and analysis of aerobiological data on large spatial scales.
Conclusions
The present study supports the hypothesis that Danish agricultural areas are the main source of airborne Alternaria spores in Denmark, meaning that the source of the overall load is mainly local or regional, but with intermittent LDT from more remote agricultural areas. These LDT episodes contributed to a large degree to the total annual load of Alternaria spores. In fact, the high days dominate the overall Alternaria load, although high days are always outnumbered by low days. The hypothesis is supported by the analysed data of the 10 yr bi-hourly record of Alternaria in Copenhagen, which shows a distinct daily profile of 232 clinically relevant episodes (Fig. 1), and by the identification of potential long distance transport episodes (Table 2) from areas that could be potential source regions (Figs. 3, 4 and 5, respectively). The emission studies in cereal crops under harvest also support our hypothesis. The results showed that although the fields had been treated against fungal infections, harvesting still produced large amounts of airborne fungal spores. The findings agree well with related studies that show a high Alternaria spore load in agricultural areas in Central Europe. This supports the hypothesis that crop harvest in Central Europe causes episodes of high airborne Alternaria spore concentrations in Copenhagen as well as in other urban areas in this region.
Our findings have several implications. Firstly, forecasting of fungal spore quantities relevant to allergy patients in Denmark must take into account long distance transport, and cannot be based on measured concentrations in Denmark alone. Secondly, allergy patients need a warning several days ahead to plan their medication intake. This information is not available for fungal spores, as the Danish information system on fungal spores is very simplistic and is based on information from Copenhagen alone (Skjøth and Sommer, 2010). An extension of the spore monitoring programme by using several spore traps would most likely be very useful, as our study suggests that the fungal spore load might be higher in other parts of the country. An alternative is to supplement the current information system with the mathematical model systems from chemical weather forecasting (e.g. Kukkonen et al., 2012) and extend these to include the spore production and emission from countries such as Germany and Poland, as well as the agricultural production in Denmark. This approach might however be very difficult, as all relevant Alternaria sources remain to be identified. Furthermore, this as well as other studies suggest that the emission pattern is related to both biology and agricultural production methods. In our study we have identified possible LDT episodes, suggested a gridded inventory of potential source areas, verified potential sources of local emission peaks from harvesting and found the typical daily pattern in the observed load of Alternaria spores. Each of these pieces of information will be very useful in the daily information to the public as well as in forecasting. The episodes that we analysed in detail showed that it is possible to have high days that follow each other and that the change from low to high load of Alternaria is related to both a change in weather and in potential source area. Such patterns can be simulated with atmospheric transport models. Furthermore, the development of emission models and inventories makes it possible to use source-based models such as DEHM (Brandt et al., 2012), SILAM (Sofiev et al., 2006) and COSMO-ART (Zink et al., 2012) for improved understanding of aeroallergens and ultimately better information to the public.

Table A2a. Daily measured precipitation in mm day−1 in the potential source region for the episode of Alternaria spore concentrations (spores m−3) measured 30-31 August 2008. The last 7 days of recorded precipitation before the potential episode are used as an indicator of good harvesting possibilities. "-" usually means that no precipitation was recorded, but could potentially also mean technical problems.

Table A3. Measured bi-hourly wind speeds (m s−1) during the identified episodes.
(n/a) 3.0 2.8 4.2 3.9 3.9 4.3 1.3 1.5 1.8 1.6 1.1 1.5
26 Jul 2003 2.8 2.8 3.7 5.4 3.2 4.1 4.7 5.2 2.5 2.9 2.6 2.6
5 Sep 2004 1.7 2.0 2.3 2.5 2.7 3.3 3.2 2.9 3.2 2.6 2.7 3.3
25 Aug 2005 3.2 3.7 4.0 4.3 4.9 5.1 5.2 7.8 5.7 4.3 3.0 3.5
10 Aug 2006 1.8 1.2 1.2 2.4 4.2 5.1 4.4 5.2 4.1 2.0 1.3 0.9
25 Aug 2006 1.3 1.2 0.7 1.7 2.2 2.6 2.7 2.0 1.4 1.6 1.3 1.5
26 Aug 2006 1.5 1.7 0.7 1.6 1.4 2.8 2.6 3.1 2.4 1.6 1.4 1.4
5 Aug 2007 2.7 2.2 1.9 1.0 2.3 3.4 3.8 4.0 3.6 3.4 3.1 1.8
11 Aug 2007 3.2 2.0 2.3 1.9 2.6 3.4 2.1 5.2 3.1 2.1 1.0 3.2
31 Aug 2008 2.1 1.1 0.6 0.9 2.5 3.5 3.3 4.1 4.3 4.0 4.6 5.1
22 Jul 2009 1.3 1.7 2.0 2.3 4.4 4.4 2.9 4.0 4.1 4.0 3.3 2.7
27 Jul 2009 2.7 4.0 3.0 3.5 4.4 3.8 4.0 4.4 3.8 2.9 2.3 3.9
6 Aug 2010 1.4 0.8 1.9 3.9 3.6 4.1 2.8 3.7 3.5 3.4 1.8 2.1
11 Aug 2010 4.2 2.6 2.5 3.3 4.2 5.3 6.0 6.4 5.0 3.8 3.0 1.9
16 Aug 2010 5.5 5.7 5.4 5.7 6.0 5.5 5.5 3.4 3.1 2.4 0.9 1.2
Fig. 1.
Fig. 1. Mean diurnal Alternaria spore concentration for days above 100 spores m−3, n = 232. The error bar for each mean value corresponds to 1 standard deviation.
Fig. 2.
Fig. 2. Site map including the location of the spore trap in Copenhagen, the precipitation stations used (Table A2) and the density of agricultural areas under rotation, the potential source of Alternaria spores during harvest.
Fig. 3.
Fig. 3. (a) Bi-hourly variation in Alternaria spore concentrations (spores m−3) obtained in Copenhagen 30 and 31 August 2008. Back-trajectories arriving at the spore trap in Copenhagen: (b) 30 August; (c) 31 August. The distance between two dots on a trajectory corresponds to one hour of atmospheric transport.
Fig. 4. (a) Bi-hourly variation in Alternaria spore concentrations (spores m−3) obtained in Copenhagen 10 and 11 August 2010. Back-trajectories arriving at the spore trap in Copenhagen: (b) 10 August; (c) 11 August. The distance between two dots on a trajectory corresponds to one hour of atmospheric transport.
Fig. 5. (a) Bi-hourly variation in Alternaria spore concentrations (spores m−3) obtained in Copenhagen 15 and 16 August 2010. Back-trajectories arriving at the spore trap in Copenhagen: (b) 15 August; (c) 16 August. The distance between two dots on a trajectory corresponds to one hour of atmospheric transport.
Table A2a.
Daily measured precipitation in mm day−1 in the potential source region for the episode of Alternaria spore concentrations (spores m−3) measured 30-31 August 2008. The last 7 days of recorded precipitation until the potential episode is used as an indicator of good harvesting possibilities. "-" usually means no precipitation was recorded, but could potentially also mean technical problems.

Table A2b.
Daily measured precipitation in mm day−1 in the potential source region for the episode of Alternaria spore concentrations (spores m−3) measured 10-11 August 2010. The last 7 days of recorded precipitation until the potential episode is used as an indicator of good harvesting possibilities. "-" usually means no precipitation was recorded, but could potentially also mean technical problems.
Table 2.
Days with episodes (above 100 spores m−3) of fungal spores with a markedly different daily pattern compared to the overall daily pattern of the 232 episodes recorded in Copenhagen during 2001-2010.
The annual spore index of Alternaria in Copenhagen varied by more than a factor of two, from 4488 in 2003 to 10 781 in 2006 (Table 1). The mean season start was day number 182, while the mean maximum day during the season was number 218, with standard deviations of 7 and 13 days, respectively. The highest daily concentration was 1016 spores m−3 on 17 August 2001 (the second largest was 853 spores m−3 on the following day, not shown). The highest observed bi-hourly concentration was 2727 spores m−3 (not shown). A total of 232 days during the 10 yr period had high concentrations above the clinical threshold of 100 spores m−3. The contribution from individual years ranged from 17 high days in 2003 (of 70 days in total within the spore season) and 2008 (of 60 days in total) to 31 days in 2009 (of 76 days in total). The contribution of the high days to the total seasonal load varied from about 55 % in 2008 to more than 82 % in 2009. The analysis of bi-hourly Alternaria spore concentrations on high days shows a typical daily pattern (Fig. 1).
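The seasonal bookkeeping quoted above reduces to simple sums; a sketch follows (variable names are hypothetical, and the spore index is assumed, as is common in aerobiology, to be the sum of daily mean concentrations over the season).

def season_summary(daily, threshold=100.0):
    # `daily`: pandas Series of daily mean concentrations (spores m-3) for one season.
    spore_index = daily.sum()
    high = daily[daily > threshold]            # clinically relevant "high" days
    high_share = high.sum() / spore_index      # ~0.55 in 2008, >0.82 in 2009
    return spore_index, len(high), high_share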
Table A1.
Observed Alternaria spores in grab samples from the harvesting machine, the calculated concentration of spores m−3 in the exhaust air and the corresponding emission factor from harvested fields.

It is not known whether LDT is a contributing factor at locations other than Copenhagen. This again calls for dedicated source-receptor studies on fungal spores at sites other than Copenhagen. Such studies can also be considered an answer to both the editorial in The Lancet (2008)
Table A2c.
Daily measured precipitation in mm day−1 in the potential source region for the episode of Alternaria spore concentrations (spores m−3) measured 15-16 August 2010. The last 7 days of recorded precipitation until the potential episode is used as an indicator of good harvesting possibilities. "-" usually means no precipitation was recorded, but could potentially also mean technical problems.
Table A2d.
Geographical coordinates of the precipitation stations, obtained from the National Centers for Environmental Prediction (NCEP) and Deutscher Wetterdienst (German stations only).
Table A3.
Observed wind speed (m s−1) on the days with episodes (above 100 spores m−3) of fungal spores that were selected and presented in Table 2. Bi-hourly values (12 per day; the date of the first row was lost in extraction):
(date lost)  3.0 2.8 4.2 3.9 3.9 4.3 1.3 1.5 1.8 1.6 1.1 1.5
26 Jul 2003  2.8 2.8 3.7 5.4 3.2 4.1 4.7 5.2 2.5 2.9 2.6 2.6
5 Sep 2004   1.7 2.0 2.3 2.5 2.7 3.3 3.2 2.9 3.2 2.6 2.7 3.3
25 Aug 2005  3.2 3.7 4.0 4.3 4.9 5.1 5.2 7.8 5.7 4.3 3.0 3.5
10 Aug 2006  1.8 1.2 1.2 2.4 4.2 5.1 4.4 5.2 4.1 2.0 1.3 0.9
25 Aug 2006  1.3 1.2 0.7 1.7 2.2 2.6 2.7 2.0 1.4 1.6 1.3 1.5
26 Aug 2006  1.5 1.7 0.7 1.6 1.4 2.8 2.6 3.1 2.4 1.6 1.4 1.4
5 Aug 2007   2.7 2.2 1.9 1.0 2.3 3.4 3.8 4.0 3.6 3.4 3.1 1.8
11 Aug 2007  3.2 2.0 2.3 1.9 2.6 3.4 2.1 5.2 3.1 2.1 1.0 3.2
31 Aug 2008  2.1 1.1 0.6 0.9 2.5 3.5 3.3 4.1 4.3 4.0 4.6 5.1
22 Jul 2009  1.3 1.7 2.0 2.3 4.4 4.4 2.9 4.0 4.1 4.0 3.3 2.7
27 Jul 2009  2.7 4.0 3.0 3.5 4.4 3.8 4.0 4.4 3.8 2.9 2.3 3.9
6 Aug 2010   1.4 0.8 1.9 3.9 3.6 4.1 2.8 3.7 3.5 3.4 1.8 2.1
11 Aug 2010  4.2 2.6 2.5 3.3 4.2 5.3 6.0 6.4 5.0 3.8 3.0 1.9
16 Aug 2010  5.5 5.7 5.4 5.7 6.0 5.5 5.5 3.4 3.1 2.4 0.9 1.2
 | 2018-12-13T05:42:35.321Z | 2012-11-22T00:00:00.000 | {
"year": 2012,
"sha1": "4b7b28a1c61b070dd65b1559769b96006201854f",
"oa_license": "CCBY",
"oa_url": "https://acp.copernicus.org/articles/12/11107/2012/acp-12-11107-2012.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "4b7b28a1c61b070dd65b1559769b96006201854f",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
268258513 | pes2o/s2orc | v3-fos-license | HOXC9 characterizes a suppressive tumor immune microenvironment and integration with multiple immune biomarkers predicts response to PD-1 blockade plus chemotherapy in lung adenocarcinoma
Background: The quest for dependable biomarkers to predict responses to immune checkpoint inhibitors (ICIs) combined with chemotherapy in advanced non-small cell lung cancer remains unfulfilled. HOXC9, known for its role in oncogenesis and in creating a suppressive tumor microenvironment (TME), shows promise in enhancing predictive precision when included as a TME biomarker. This study explores the predictive significance of HOXC9 for the efficacy of ICI plus chemotherapy in lung adenocarcinoma (LUAD). Methods: Following the bioinformatic findings, assays were performed to ascertain the effects of Hoxc9 on oncogenesis and response to programmed death 1 (PD-1) blockade. Furthermore, a cohort of LUAD patients was prospectively enrolled to receive anti-PD-1 plus chemotherapy. Based on the expression levels, baseline characteristics, and clinical outcomes, the predictive potential of HOXC9, PD-L1, CD4, CD8, CD68, and FOXP3 was integrally analyzed. HOXC9 not only mediated oncogenesis, but also correlated with a suppressive TME. CMT167 and LLC cell lines unveiled the impacts of Hoxc9 on proliferation, invasion, and migration. Subsequently, tumor-bearing murine models were established to validate the inverse relationship between Hoxc9 expression and effective CD8+ T cells. Results: Inhibition of Hoxc9 significantly curtailed tumor growth (P<0.05), independent of PD-1 blockade. In patient studies, while individual markers fell short in prognosticating survival, a notable elevation in CD8-positive expression was observed in responders (P=0.042). Yet, the combination of HOXC9 with other markers provided a more distinct differentiation between responders and non-responders. Notably, patients displaying PD-L1+/HOXC9- and CD8+/HOXC9- phenotypes exhibited significantly prolonged progression-free survival. Conclusions: The expression of HOXC9 may serve as a biomarker to amplify the predictive efficacy for ICIs plus chemotherapy; HOXC9 is also a viable oncogene and therapeutic target for immunotherapy in LUAD.
INTRODUCTION
Lung cancer is the leading cause of cancer death worldwide, with an estimated 1.8 million deaths in one year [1]. Non-small cell lung cancer (NSCLC) constitutes the predominant subset, accounting for 80-85% of cases and encompassing lung adenocarcinoma (LUAD) and squamous cell carcinoma (LUSC) [2,3]. Over the past decade, the advent of immune checkpoint inhibitors (ICIs) has revolutionized the therapeutic landscape of lung cancer, especially those targeting programmed death 1 (PD-1) and programmed death ligand 1 (PD-L1) [4]. In locally advanced or metastatic NSCLC, the combination of PD-1 [5,6] or PD-L1 [7] blockade with chemotherapy has demonstrated marked enhancements in both progression-free survival (PFS) and overall survival (OS) compared with chemotherapy alone in the first-line setting, irrespective of the PD-L1 percentage. Despite its emergence as a core pillar of NSCLC treatment, nearly half of patients do not benefit from this regimen, with objective response rates (ORR) spanning 48% to 64% [4]. Notably, a significant proportion eventually develop resistance to ICIs after an initial response [3]. Hence, there is an urgent need to identify reliable predictive biomarkers for this therapeutic regimen.
Presently, PD-L1 expression, tumor mutational burden (TMB) and microsatellite instability (MSI) serve as officially sanctioned predictive biomarkers guiding ICI monotherapy, yet their screening accuracy remains wanting [3,8]. In the context of ICIs combined with chemotherapy, the predictive capabilities of PD-L1 expression and TMB have not been conclusively established [9]. The intricate interplay between tumor cells and the tumor microenvironment (TME) underscores its crucial impact on immunotherapy, propelling the TME to the forefront as a noteworthy source of predictive markers for immunotherapy and a potential therapeutic target [10,11]. The TME encompasses an array of constituents, including diverse cancer cells, immune cells, vasculature, stromal elements, signaling molecules, and extracellular matrix proteins, manifesting a variety of markers such as CD4+ helper T cells, CD8+ tumor-infiltrating lymphocytes (TILs), CD68+ tissue-associated macrophages, FOXP3+ regulatory T cells (Tregs), and the interferon-gamma (IFNγ) gene signature [3,11,12]. Given this complexity, there is increasing interest in leveraging distinct TME characteristics to develop more reliable predictive biomarkers for immunotherapy [11].
The Homeobox (HOX) gene family orchestrates the synthesis of transcriptional regulators pivotal in embryonic development, cell morphogenesis and organ differentiation [13]. Homeobox C9 (HOXC9), a member of the HOX family [14], shows anomalous expression across various cancer cell lines and tissues, such as colorectal cancer [15], gastric cancer [16], and breast cancer [17]. For instance, in breast cancer cell lines, HOXC9 overexpression curtailed cell proliferation while bolstering invasive capability, promoting a phenotypic shift from proliferation to invasiveness [17]. Recently, the role of HOXC9 in LUAD has been explored to some extent. Bi et al. [18] elucidated its prognostic hazard in LUAD via the hsa_circ_0020123/miR-495/HOXC9 axis. Similarly, Liu et al. [19] established a correlation between HOXC9 and the circRNA system in LUAD. Interestingly, they further found that HOXC9 was associated with diminished infiltration of CD8+ T cells and dendritic cells, potentially contributing to antitumor immunity dysregulation and poorer outcomes for individuals with elevated HOXC9 expression, suggesting its prospective utility as a suppressive TME biomarker. However, the extent of its regulatory influence within the TME, as well as its predictive value for ICI response and survival, necessitates further elucidation.
In the present study, we explored the regulatory role of HOXC9 in the TME both in vivo and in vitro. We then investigated the predictive role of HOXC9 in LUAD, comparing its efficacy with other markers in predicting ICI responses and patient survival. Additionally, we explored the accuracy of different marker combinations, aiming to provide a more effective biomarker for personalized treatment.
Gene expression analysis
We employed the online tools in the 'Exploration' mode of Tumor Immune Estimation Resource 2.0 (TIMER2) (http://timer.cistrome.org/) [20] to acquire the diverse expression patterns of HOXC9 across various cancers and neighboring healthy tissues. The raw data underwent normalization through log2(TPM) (Transcripts Per Kilobase of exon model per Million mapped reads) conversion.
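The log2(TPM) scale can be reproduced from raw counts; the following is a minimal sketch, not TIMER2's code. The pseudocount is an assumption (log2 of a zero TPM is otherwise undefined).

import numpy as np

def log2_tpm(counts, gene_lengths_kb, pseudocount=1.0):
    # counts: genes x samples array; gene_lengths_kb: per-gene exon length in kb.
    rpk = counts / gene_lengths_kb[:, None]      # reads per kilobase
    tpm = rpk / rpk.sum(axis=0) * 1e6            # per-sample scaling to one million
    return np.log2(tpm + pseudocount)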
Genetic alteration and promoter
The estimation of HOXC9 genetic changes was conducted using the web-based cBioPortal repository (https://www.cbioportal.org/) [21,22]. This analysis relied on data from the TCGA pan-cancer atlas research database [23]. The mutation distribution of HOXC9 was graphed using the Catalogue of Somatic Mutations in Cancer (COSMIC) (http://www.sanger.ac.uk/cosmic/) [24], a publicly available resource that provides details about somatic mutations found in human cancers.
Survival prognosis analysis
Kaplan-Meier survival analysis of HOXC9 in LUAD, based on the TCGA database alongside clinical data from our hospital, was conducted and visualized through R's survival and survminer packages. The computed values encompassed the log-rank P-value, hazard ratio (HR), and 95% confidence interval (CI).
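The authors used R's survival and survminer packages; a minimal equivalent in Python's lifelines is sketched below (column names os_months, death and the expression column are hypothetical).

import numpy as np
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

def km_by_median(df, expr="HOXC9", time="os_months", event="death"):
    # Dichotomize expression at the median, then compute the log-rank P-value
    # and the hazard ratio with its 95% CI from a univariate Cox model.
    high = (df[expr] > df[expr].median()).astype(int)
    lr = logrank_test(df.loc[high == 1, time], df.loc[high == 0, time],
                      df.loc[high == 1, event], df.loc[high == 0, event])
    cph = CoxPHFitter().fit(df[[time, event]].assign(high=high),
                            duration_col=time, event_col=event)
    hr_ci = np.exp(cph.confidence_intervals_.loc["high"])  # CI on the HR scale
    return lr.p_value, cph.hazard_ratios_["high"], hr_ci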
Cells and reagents
The murine CMT167 and Lewis lung cancer (LLC) cell lines were procured from the European Collection of Authenticated Cell Cultures (ECACC) and the American Type Culture Collection (ATCC). They were cultured in DMEM medium (Gibco, Thermo Fisher Scientific, Inc., USA) supplemented with 10% fetal bovine serum (Thermo Fisher Scientific, Inc.) and 1% Penicillin-Streptomycin (Solarbio, Beijing, China). Incubation was carried out at 37° C with 5% CO2 and 95% humidity. Following procurement, the cell lines were authenticated by Short Tandem Repeat (STR) analysis, and routine screenings verified their negative status for mycoplasma contamination.
Immunoblotting and qPCR were used to confirm gene expression levels. The sequence of siHoxc9-1 informed the design of the shRNA constructs. These constructs, along with lentivirus packaging vectors, were introduced into HEK-293T cells to generate lentiviral particles for shRNA-mediated gene silencing or Hoxc9 overexpression. The viral particles, at a multiplicity of infection (MOI) of 20, were purified and applied to the cancer cells. After viral transduction, puromycin was employed as a selective agent to establish cell lines stably expressing silenced or overexpressed Hoxc9.
Real time quantitative polymerase chain reaction (RT-qPCR)
Total RNA was isolated from cells 48 hours post transfection using TRIzol reagent (Invitrogen, USA). The sample loading scheme was determined by the number of gene samples and cells examined. To conform to this scheme, a mixture was prepared by combining iTaq Universal SYBR RT-qPCR mix reagent (Bio-Rad, USA) and primers. All experimental steps adhered to the manufacturers' guidelines. RT-qPCR was employed to determine the relative expression of Hoxc9 in CMT167 and LLC cells. Hoxc9-specific primers were used for PCR amplification, designed as follows: Forward: 5'-CCGACCTGGACCCTAGCAAC-3'; Reverse: 5'-CCGACGGTCCCTGGTTAAAT-3'.
Actb-specific primers were used for PCR amplification, designed as follows: Forward: 5'-GTGACGTTGACATCCGTAAAGA-3'; Reverse: 5'-GCCGGACTCATCGTACTCC-3'. An RT-qPCR apparatus (Roche, USA) was used to run the programmed reaction, with 1 μL of cDNA template subsequently introduced into each well. The experiments were conducted in triplicate. The qPCR data were exported, and their reliability was assessed based on melting curve analysis.
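The protocol does not state the quantification model; assuming the common 2^-ΔΔCt method with Actb as the reference gene, the calculation would be as sketched here.

def relative_expression(ct_hoxc9, ct_actb, ct_hoxc9_ctrl, ct_actb_ctrl):
    # 2^-ddCt: normalize the target Ct to the Actb reference, then to the
    # control (e.g., siNC) group; inputs are mean Ct values of triplicates.
    d_ct = ct_hoxc9 - ct_actb
    d_ct_ctrl = ct_hoxc9_ctrl - ct_actb_ctrl
    return 2.0 ** -(d_ct - d_ct_ctrl)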
Western-blot
Hoxc9 protein expression was determined by Western blotting. Initially, RIPA lysis buffer was used to extract total proteins from CMT167 and LLC cells. Protein quantification was carried out using the BCA Protein Assay Kit (Biovision, USA). Protein electrophoresis was run on a Western blotting electrophoresis system (Bio-Rad, USA), with 20 micrograms of protein loaded per well, followed by transfer onto a PVDF membrane (Bio-Rad). Following a blocking phase with 10% skim milk, the membranes underwent antigen-antibody incubation with specific antibodies, including an anti-DDK-tag monoclonal antibody (1:1000, Abcam), an anti-Hoxc9 antibody (1:1000, Absin), and β-actin (1:2000; CST), under persistent shaking overnight at 4° C. On the following day, the membranes were treated with the appropriate species-specific horseradish peroxidase-conjugated secondary antibodies (1:5000) and incubated for two hours at ambient temperature. The bands were visualized using a chemiluminescence detection system.
Cell viability assay
Cell viability was assessed using the Cell Counting Kit-8 (Takara, China). A suspension of CMT167 and LLC cells in robust growth condition was prepared at a concentration of 2×10³ cells per 100 μL. Subsequently, 100 μL of the cell suspension was dispensed into individual wells of a 96-well plate and placed in an incubator at 37° C for initial incubation.
On the following day, the preceding culture medium was removed and 10 μL of CCK-8 solution was introduced into each well of the 96-well plate. After 2 hours of incubation, the absorbance (OD) was measured at a wavelength of 450 nm using the Multiskan FC microplate reader. The entire procedure was replicated three times to ensure reliability and consistency.
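Relative viability from the OD450 readings can be computed as sketched below; blank-well subtraction is an assumption, since the protocol above does not mention a blank.

def viability_percent(od_treated, od_control, od_blank=0.0):
    # CCK-8 readout at 450 nm; inputs are means of the triplicate wells.
    return 100.0 * (od_treated - od_blank) / (od_control - od_blank)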
Colony formation assay
Lung cancer cell proliferation after Hoxc9 knockdown or overexpression was evaluated with the colony formation assay. Post transfection, CMT167 and LLC cells were digested and seeded into 6-well plates at a density of 500 cells per well, in accordance with the characteristics of lung cancer cells, and cultured in a 37° C incubator with 5% CO2. When visible clones had emerged in most wells of the six-well plate, after 10 days of culturing, the culture was halted. The cells were then fixed with formaldehyde for 15 minutes and stained with crystal violet for 10 minutes. Images of the clone wells were captured with camera software, and clone counting was executed with ImageJ software. Each group was evaluated in triplicate wells.
Wound healing assay
CMT167 and LLC cells were seeded into a 6-well plate. Upon reaching 80% confluence, the monolayer was gently scratched using a sterile micropipette tip, and the floating cells were washed away with PBS. Wound healing within the scratch was observed at 0 hours and 24 hours. Each condition was evaluated in sets of three wells.
Cell invasion assay
The cell invasion assay was conducted in 24-well plates with chambers featuring an 8-μm pore size (Corning, Inc.). Matrigel was diluted 1:8 with serum-free medium, and the resultant matrix gel was uniformly applied onto the membrane at the base of the Transwell chamber; this assembly was then placed in a 37° C incubator for 2 hours. Following 48 hours of transfection, a lung cancer cell suspension was prepared in serum-free culture medium and adjusted to a density of 5×10⁵ cells/ml, and 5×10⁴ cells were seeded into each Transwell chamber. In the lower compartment of the 24-well plate, 500-650 μL of medium containing 10% FBS was introduced. Finally, the Transwell chamber was removed, fixed with 4% paraformaldehyde, and stained with 0.5% crystal violet. Cells remaining on the upper surface of the filter membrane were gently removed with a cotton swab. Images of cells traversing the invasion chamber were captured with an optical microscope.
Tumor-bearing murine models and treatments
To establish tumor-bearing murine models, 1.0×10⁶ LLC cells (infected with lentiviral particles) in 200 μL PBS were implanted subcutaneously into the right hind flank of syngeneic BALB/c mice at day 0. Tumor growth was examined at 2, 5, 8, 11, 14, 17 and 20 days. On day 6, 36 mice were randomly assigned to six groups: control with IgG isotype (N=6), control with anti-PD-1 (N=6), Hoxc9 overexpression with IgG (N=6), Hoxc9 overexpression with anti-PD-1 (N=6), Hoxc9 knockdown with IgG (N=6) and Hoxc9 knockdown with anti-PD-1 (N=6). For the anti-PD-1 groups, an anti-PD-1 mAb (#BE0146, clone: RMP1-14) was administered i.p. every three days, starting at day 5. The control groups were injected with IgG control (#BE0089, clone: 2A3) on each injection day. Tumor size was measured with digital calipers every three days, and tumor volume was calculated as length×width²/2 (mm³). Mice were sacrificed and tissues were analyzed at or before the ethical tumor volume limit of 1000 mm³.
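The volume formula stated above translates directly into code; the caliper reading in the example is hypothetical.

def tumor_volume(length_mm, width_mm):
    # Volume = length x width^2 / 2, in mm^3, as defined above.
    return length_mm * width_mm ** 2 / 2.0

# Hypothetical example: a 12 mm x 9 mm tumor.
v = tumor_volume(12.0, 9.0)  # 486 mm^3, below the 1000 mm^3 ethical limit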
Flow cytometry for cell cycle, apoptosis and infiltrating immune cell analyses
To assess the impact of Hoxc9 knockdown on the cell cycle of CMT167 and LLC cells, flow cytometry analysis was conducted. CMT167 and LLC cells were trypsinized and collected by centrifugation at 1000 rpm for 5 minutes following 48 hours of siHoxc9 or siCtrl transfection. The cells were then fixed by suspension in pre-chilled 75% ethanol and kept at 4° C overnight. Following the guidelines of the flow cell cycle kit (Biyuntian, Shanghai), the cells were thoroughly mixed, filtered through the membrane and transferred to a 5 ml flow tube. For apoptosis analysis, 1X Binding Buffer was prepared according to the instructions. Transfected lung cancer cells were collected and resuspended in 1X staining buffer to a concentration of 5×10⁶ cells/mL. Five μL of fluorescent dye-conjugated Annexin V was added and incubated for 10-15 min at room temperature; the cells were then resuspended in staining buffer, 5 μL of propidium iodide (PI) dye was added, and the mixture was incubated on ice for 15 minutes. The data were read by flow cytometry.
Study population, treatment and assessments
Prospective enrollment took place at Shanghai Pulmonary Hospital, involving patients with advanced LUAD. Inclusion criteria encompassed age between 18 and 70 years; histologically or cytologically confirmed stage IV LUAD (per the 8th edition of the International Association for the Study of Lung Cancer Staging Handbook in Thoracic Oncology); absence of EGFR and ALK alterations; ECOG PS score of 0 or 1; no prior systemic therapy; at least one measurable lesion per RECIST v1.1; and a projected life expectancy of over 3 months. Complete baseline characteristics and radiologic images, including chest CT, brain MRI, and PET-CT scans, were evaluated by two specialized radiologists. Before the first-line combination of PD-1 blockade and chemotherapy commenced, tumor samples were acquired through image-guided percutaneous lung biopsy. PD-L1 expression was evaluated as part of the initial screening procedure at the central pathology laboratory of our institution. This evaluation hinged on the Tumor Proportion Score (TPS), the proportion of viable tumor cells displaying partial or complete membrane staining, regardless of staining intensity. The resulting values were routinely classified into three groups: TPS <1%, TPS 1-49%, and TPS ≥50%. The study was approved by the ethics committee of Shanghai Pulmonary Hospital and was conducted in accordance with the Declaration of Helsinki (as revised in 2013). All participants signed informed consent forms before the initiation of this research.
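The TPS trichotomy used in screening is a simple binning, sketched below for illustration.

def tps_group(tps_percent):
    # Trichotomy applied at the central pathology laboratory.
    if tps_percent < 1:
        return "TPS <1%"
    if tps_percent < 50:
        return "TPS 1-49%"
    return "TPS >=50%"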
Treatment was administered until radiographic progression, the occurrence of intolerable toxic effects, a determination made by the investigator, or voluntary withdrawal of consent by the patient. If toxicity could be definitively attributed to a single agent, that specific drug could be discontinued. Tumor imaging sessions were planned at weeks 6 and 12, followed by intervals of 9 weeks until week 48, and subsequently at 12-week intervals. The assessment of response was conducted in accordance with RECIST, version 1.1 [25]. Additionally, patients were contacted every 12 weeks to evaluate survival outcomes during the follow-up period. The primary endpoints encompassed OS and PFS. OS indicated the time from randomization to death resulting from any cause; PFS denoted the time from randomization to either disease progression or death, whichever transpired first, as evaluated by a blinded, independent central radiologic review. Secondary endpoints included the response, the duration of response, and safety.
Immunohistochemical staining
Paraffin-embedded lung cancer tissue samples were sliced to 4 μm thickness and affixed onto slides for deparaffinization and hydration. Antigen retrieval was then performed under high temperature and pressure to expose the antigen for precise antibody binding, after which endogenous peroxidase was neutralized by a 10-minute treatment with hydrogen peroxide. Blocking utilized goat-derived serum. The slices were covered with diluted HOXC9, CD4, CD8, CD68 and FOXP3 (Abcam, Cambridge, MA, USA) mouse monoclonal antibodies in a humidified chamber for 2 hours at 37° C. After overnight blocking at 4° C, sections were fully covered with a secondary antibody and incubated at 37° C for 30 minutes. Subsequent DAB (Abcam, Cambridge, MA, USA) coloration was meticulously controlled. Slices were sealed with neutral gum, air-dried in a fume hood, and then microscopically analyzed for images. Staining assessments were performed simultaneously by two independent observers, who scored each stained section and reached a consensus. The cut-off values for positive status were calculated from the median percentage of the positive cell proportion for each marker.
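The median-based positivity call can be sketched as follows; this is not the authors' code, and the DataFrame df, holding one row per patient with per-marker positive-cell proportions, is hypothetical.

import pandas as pd

def dichotomize(df, markers=("HOXC9", "CD4", "CD8", "CD68", "FOXP3")):
    # Cut-off = per-marker median of the positive cell proportion; values
    # above the median are called positive (True), the rest negative (False).
    status = {m: (df[m] > df[m].median()) for m in markers}
    return pd.DataFrame(status, index=df.index)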
Statistical analysis
Statistical analyses were performed using the aforementioned online databases and R software (version 3.6.3). Data are presented as mean ± SD. For the statistical analysis of experimental data, GraphPad Prism 9.5.1 (GraphPad Software; Dotmatics) was employed. The unpaired Student's t test, the Wilcoxon rank-sum test and one-way ANOVA (with LSD as post hoc test) were used to calculate the significance of differences between and among groups. P<0.05 was considered to indicate a statistically significant result.
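The group comparisons above map onto SciPy equivalents, as sketched here; GraphPad's LSD post hoc test is not reproduced.

from scipy import stats

def compare(a, b, *more):
    # Two groups: unpaired t test and Wilcoxon rank-sum; three or more
    # groups: one-way ANOVA. P<0.05 is taken as significant.
    if more:
        return stats.f_oneway(a, b, *more)
    return stats.ttest_ind(a, b), stats.ranksums(a, b)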
Availability of data and materials
Bioinformatics datasets presented in this study can be found in online repositories, and the datasets used and/or analyzed during experiments are available from the corresponding author on reasonable request.
Consent for publication
All authors have approved the manuscript for submission.
HOXC9 is overexpressed and correlated with prognosis in LUAD
Firstly, we assessed HOXC9 expression in pan-cancer data from TCGA and GTEx. TIMER2 pan-cancer analysis showed that HOXC9 was significantly higher in 14 types of tumors than in the corresponding normal tissues, including BLCA, BRCA, CHOL, ESCA, GBM, HNSC, LUAD, LUSC, PCPG, PRAD, SKCM, STAD, THCA, and UCEC (all P<0.05) (Figure 1A). Then, the correlations between HOXC9 expression level and different clinical variables in LUAD were analyzed via UALCAN tools, which indicated that higher HOXC9 expression was significantly associated with higher individual stage and tumor grade (Figure 1B-1E). To confirm this finding, we compared HOXC9 protein levels between adjacent normal and cancerous tissues from clinical LUAD samples and also found it upregulated in tumor samples (Figure 1F, 1G). Moreover, the receiver operating characteristic curve, with an area under the curve of 0.741 (95% CI 0.691-0.791), indicated a high diagnostic value of HOXC9 in LUAD, as shown in Figure 1H. Survival analysis showed that high expression of HOXC9 was associated with shorter OS (HR=1.52, 95% CI 1.140-2.030, P=0.001) in LUAD (Figure 1I).
At the level of genomic exploration, the landscape of HOXC9 genetic mutations and alterations was investigated in various tumor samples from TCGA datasets through the online database cBioPortal (Supplementary Figure 1A). The main type of genetic alteration in HOXC9 was "mutation", observed in the bulk of TCGA cancers, with "amplification" the second most common. In LUAD patients, the most frequent HOXC9 alteration category was high mRNA expression. With the COSMIC online tool, an overview of the mutation types was obtained: the primary mutation type was missense substitution (52.43%), and the primary substitution type was C>T (33.22%) (Supplementary Figure 1B).
HOXC9 expression is related to repressive immune cells and plays an adverse role in responses to immunotherapy
The relationship between HOXC9 expression and the infiltration of immune-repressive cells (myeloid-derived suppressor cells and Tregs) is shown in Figure 2A, 2B. To further explore the role of HOXC9 in the TME, we conducted a correlation analysis of HOXC9 expression with other immune-infiltrating cells in LUAD, which demonstrated that patients with relatively high HOXC9 possessed fewer dendritic cells and central memory T cells (Figure 2C). Further verifying the negative role of HOXC9 in responses to immunotherapy, data from TIDE showed that relatively higher HOXC9 was related to worse OS in bladder cancer (P=0.043, Figure 2D), kidney cancer (P=0.027, Figure 2E), and melanoma (P=0.016, Figure 2F) treated with ICIs.
Knockdown of Hoxc9 reduces the proliferative capacity of murine NSCLC cells
Taken together, HOXC9 is an oncogene that drives tumorigenesis and progression in lung cancer and can lead to poor prognosis by activating cell proliferation and causing immune dysfunction. We then performed a series of in vitro experiments to verify that knockdown of Hoxc9 at the cellular level could inhibit its malignant biological behavior. CMT167 and LLC cell lines were transfected with siRNA against Hoxc9. The knockdown efficiency of siHoxc9-1 and siHoxc9-2 was assessed by RT-qPCR, and the results showed that siRNA effectively reduced the mRNA level of Hoxc9 in CMT167 and LLC cells (Figure 3A, 3B). Subsequent western blot experiments also verified that the protein expression level of Hoxc9 was significantly reduced after siRNA transfection (Figure 3C, 3D). The CCK-8 assay was used to detect the effect of Hoxc9 knockdown on cell proliferation: compared with the siNC (si negative control) group, the proliferation rate of CMT167 and LLC cells in the Hoxc9 knockdown group was significantly reduced (Figure 3E, 3F, P<0.001). The colony formation assay was used to detect the effect of Hoxc9 knockdown on colony formation: compared with the vector group, the number of CMT167 and LLC colonies in the Hoxc9 knockdown group was significantly reduced (Figure 3G, 3I, P<0.001). These results show that knockdown of Hoxc9 expression reduced cell proliferation and significantly inhibited the colony formation ability of CMT167 and LLC cells.
Knockdown of Hoxc9 promoted apoptosis and cell cycle arrest
Flow cytometry was used to analyze the effect of Hoxc9 knockdown on the cell cycle and apoptosis. The cell cycle experiments suggested that, compared with the siNC group, the number of cells in S phase was significantly increased and the number of cells in G1 phase decreased in the two Hoxc9 knockdown groups (Figure 4A-4D, P<0.001), indicating that proliferation ability was significantly impaired after blocking Hoxc9 in CMT167 and LLC cells. Subsequently, PI/Annexin V flow cytometry was used to analyze the effect of Hoxc9 knockdown on the apoptosis of CMT167 and LLC lung cancer cells. The results suggested that the number of early apoptotic cells was significantly increased in the two Hoxc9 knockdown groups compared with the siNC group (Figure 4E-4H, P<0.001). Thus, knockdown of Hoxc9 expression caused lung cancer cell apoptosis, and this biological effect may be related to cell cycle S-phase arrest.
Knockdown of Hoxc9 reduces the migratory and invasive capacity of murine NSCLC cells
To assess the impacts of Hoxc9 knockdown on the migration and invasion of CMT167 and LLC cells, we conducted wound healing and Transwell assays (Figure 5).
Overexpression of Hoxc9 promoted cell proliferation and migration in lung cancer cell lines
The function of Hoxc9 overexpression in lung cancer was determined by in vitro cell experiments. A DDK-tagged Hoxc9 overexpression vector was constructed and transfected into CMT167 and LLC cells. Western blot analysis confirmed that Hoxc9 was overexpressed in CMT167 and LLC cells after plasmid transfection (Figure 6A). Overexpression of Hoxc9 promoted lung cancer cell proliferation (Figure 6B, 6C) and colony formation (Figure 6D). In CMT167 and LLC cells, Hoxc9 overexpression also increased migration (Figure 6E, 6F) and invasion (Figure 6G). Overall, overexpression of Hoxc9 promotes lung cancer proliferation, colony formation, and tumor cell invasion and progression.
Differential Hoxc9 expression levels affected tumor growth and CD8+ T cell infiltration with or without PD-1 blockade
With the establishment of tumor-bearing murine models, we verified the hypothesis that Hoxc9 expression level impacted tumor growth and sensitivity to PD-1 blockade; the experimental diagram is shown in Figure 7A. As shown in Figure 7B, a notable reduction in tumor growth rate was evident in the Hoxc9 knockdown group compared with the control group, both under IgG isotype conditions (P<0.05) and following PD-1 blockade intervention (P<0.01) (the tumor model photographs are in Supplementary Figure 2). Overall, while the efficacy of PD-1 blockade exhibited a positive trend over Hoxc9 knockdown alone, statistical significance was not achieved. Intriguingly, overexpression of Hoxc9 nearly counteracted the effects of PD-1 blockade, as observed relative to the control group with IgG isotype. The present study involved the disaggregation of mouse tumor tissues into individual cells for subsequent flow cytometry analysis; the outcomes of the sorting process are illustrated in Figure 7C. In the isotype control group, there was no statistically significant augmentation of tumor-infiltrating CD3+ T cells. However, upon attenuation of Hoxc9 expression, a noteworthy increase was noted in activated CD8+ T cells (P<0.05) and IFN-γ production (P<0.01) (Figure 7D). Conversely, overexpression of Hoxc9 elicited the opposite effect. Additionally, under PD-1 blockade treatment, and concurrent with the tumor growth dynamics, downregulation of Hoxc9 produced a pronounced augmentation of CD3+ T cell infiltration, CD8+ T cell activation, and IFN-γ production (P<0.001). These findings collectively suggest a potential synergistic antitumor immune response through concomitant modulation of immune checkpoints and Hoxc9.
HOXC9 expression improves the predictive power for response to PD-1 blockade plus chemotherapy and survival in LUAD
Three officially approved predictive biomarkers, PD-L1 expression, TMB, and MSI, guide the clinical application of anti-PD-1/PD-L1 monotherapy with moderate predictive performance [3,8]. However, effective predictive markers for ICIs combined with chemotherapy remain underreported [8,9]. The TME, characterized by distinct subpopulations of immune cells such as CD3+, CD8+, CD68+, and FoxP3+ cells, as well as other known or potential molecules, offers a diverse array of candidate predictors that warrants continued exploration [3,12]. To investigate the predictive value of HOXC9 expression in the context of PD-1 blockade plus chemotherapy, we prospectively enrolled advanced LUAD patients with no prior systemic treatment for first-line immunochemotherapy, and baseline biopsy samples were obtained for IHC analysis. From January 2021 to March 2021, 31 patients treated with anti-PD-1 plus chemotherapy at our center were included in the final analysis (Figure 8A). The baseline characteristics of the participants are presented in Supplementary Table 1. The positive and negative expression of HOXC9, CD4, CD8, CD68, and FOXP3, dichotomized at the median, is presented in Figure 8B. Our initial assessment examined correlations between individual TME marker expression and objective responses. Notably, responders exhibited significantly higher CD8-positive expression (Figure 8C). PD-L1 expression showed an increasing trend among responders, while the expression of HOXC9 and FOXP3 showed a decreasing trend, though statistically insignificant. Worth mentioning is the substantially low overall expression of FOXP3. Clinical practice widely employs a trichotomy of PD-L1 TPS with cutoff values of 1% and 50%; unfortunately, the predictive capacity of this trichotomy for treatment response and survival proved even inferior to a dichotomy of PD-L1 TPS based on the median.
We then conducted a focused investigation of the correlations between various combinations of HOXC9 and other markers and their impact on objective response. The results revealed a distinct differentiation between responders and non-responders, as illustrated in Figure 8D. Subsequently, we carried out a more detailed exploration of the prognostic value of individual markers and of combinations of HOXC9 with other markers. Notably, none of the individual markers exhibited predictive ability for survival (Figure 8E). However, patients characterized as PD-L1+/HOXC9- and CD8+/HOXC9- both showed notably extended PFS (P=0.027 and P=0.030, respectively; Figure 8F). In general, the inclusion of HOXC9 tended to enhance the differentiation of patients with differential prognosis. Consequently, HOXC9 expression generally enhances the predictive accuracy for both treatment response and survival outcomes when PD-1 blockade is combined with chemotherapy in LUAD.
DISCUSSION
The utilization of PD-1/PD-L1 inhibitors, with or without chemotherapy, has become the standard of care as first-line treatment for patients with locally advanced or metastatic NSCLC. Nonetheless, dependable biomarkers are still lacking, particularly in the context of ICIs complemented by chemotherapy. To the best of our knowledge, this is the first investigation of the predictive value of HOXC9 for immunotherapy in LUAD patients, integrating bioinformatic analysis, in vitro and in vivo experiments, and clinical data on immunotherapy responses and patient survival.
HOXC9 was previously reported to play a promotive role in oncogenesis [18], in a repressive TME [19], and in the chemotherapy resistance of bladder cancer [26]. High HOXC9 expression in LUAD indicated poorer OS and DFS, whereas HOXC9 expression levels were not associated with OS or DFS in lung squamous cell carcinoma (LUSC), which may originate from differential patterns of immune cell infiltration [19]. We verified the irrelevance of HOXC9 to survival in LUSC, and no positive correlation was found between HOXC9 expression and Tregs in LUSC (Figure 2B). Thus, the present study focused on LUAD. In line with previous studies, we found that HOXC9 was related to oncogenesis, stage and OS in LUAD (Figure 1). In vitro experiments further demonstrated that regulation of Hoxc9 expression significantly impacted the proliferation, migration and invasion of two murine tumor cell lines, CMT167 and LLC (Figures 3-6). Moreover, HOXC9 expression plays an adverse role in the response to ICIs across multiple cancers, which might be attributed to its negative relationship with antitumor immune function (Figure 2). However, its specific function under immunotherapy in LUAD has rarely been investigated.
Our study pioneers the exploration of the impact of Hoxc9 expression within the TME and its influence on survival outcomes under ICI therapy, employing in vivo experiments. In terms of tumor volume, with or without ICI treatment, knockdown of Hoxc9 significantly inhibited tumor growth. A "hot" TME, characterized by the presence of activated TIL landscapes, is crucial for ICIs to work [27,28]; accordingly, T cell sorting was employed to reflect the extent of immune activation. In line with the tumor growth patterns, downregulation of Hoxc9 was associated with a significant increase in activated CD8+ T cells and IFN-γ production. Notably, tumors subjected to both Hoxc9 knockdown and ICI treatment exhibited the most substantial growth inhibition and the most pronounced activation of CD3+/CD8+/IFN-γ+ T cells. This observation suggests a potential synergistic effect of PD-1 blockade and HOXC9 inhibition in enhancing therapeutic efficacy in LUAD patients.
Admittedly, although targeting HOXC9 or related pathways [18,19] represents a potential therapeutic strategy for LUAD or NSCLC, the development of targeted drugs is a long way off. In our view, its predictive value for response to ICI treatment and survival deserves more attention. There is almost a consensus that a single marker can hardly predict outcomes in immunotherapy accurately, owing to patient heterogeneity and the complexity of antitumor immunity [29]; accumulating evidence shows that incorporating distinct biomarkers, and even multi-omics data, is the key to developing robust predictors of response. TME features have long been considered favorable predictive markers for immunotherapy and were first classified into four subtypes according to PD-L1 expression and TIL status [30]. In advanced NSCLC, several studies have found that TME types defined by PD-L1 expression and CD8+ TILs or CD68+ macrophages could accurately predict treatment responses and survival under ICI monotherapy [31-33]. A recent prospective study by Grell et al. also revealed that combining FOXP3 and CD68 could better predict both PFS and OS than FOXP3 alone in NSCLC patients treated with ICI monotherapy [34]. More strikingly, multiplex immunohistochemistry/immunofluorescence has been developed to combine TME features, demonstrating robust predictive value [8,32,33].
As for ICIs combined with chemotherapy, only two studies have reported valid results. Expression of the major histocompatibility complex class II (MHC-II) antigen presentation pathway was recently reported to successfully identify the patients most likely to benefit in non-squamous NSCLC [9]. The other study preliminarily revealed the predictive value of CD8/PD-L1 or CD68/PD-L1 co-expression for camrelizumab plus chemotherapy as first-line treatment in patients with locally advanced or metastatic NSCLC [8]. The present study also found that no existing single marker could reliably predict the response to immunochemotherapy and outcomes. However, the addition of HOXC9 tended to better distinguish patients with differential responses and prognosis, and PD-L1+/HOXC9- and CD8+/HOXC9- patients correlated with significantly longer PFS. Thus, results from this small sample suggest that HOXC9 expression could improve the predictive power for response to PD-1 blockade plus chemotherapy and for survival outcomes in LUAD.
Several limitations should be acknowledged. First, public data on HOXC9 in lung cancer immunotherapy cohorts are missing. Second, our research focused on phenotypes, so more rigorous in vivo experiments are needed to confirm the current findings and to reveal detailed characteristics of HOXC9 in LUAD. Third, the TME in the stromal area could not be evaluated, as all available samples were biopsy specimens from the tumor area. Lastly, although our study was prospectively designed and baseline characteristics were well balanced, given the limited sample size the present results should be interpreted cautiously and still need independent validation in a larger cohort.
The combination of various biomarkers, including patient characteristics, imaging, pathology, peripheral blood, and genomic data, holds potential to guide holistic treatment approaches. The integration of machine learning with multimodal features is emerging as a promising method for predicting treatment responses [35]. Further research should investigate the specific cell subtypes expressing HOXC9 in the LUAD microenvironment with single-cell RNA sequencing, identify the downstream interacting molecules, and clarify the detailed regulatory role of HOXC9 in LUAD immunotherapy.
CONCLUSIONS
In summary, this study integrated data from multiple public bioinformatic databases, in vitro and in vivo experiments, and baseline biopsy samples from our center to identify a potential biomarker for ICIs plus chemotherapy in LUAD. The integration of HOXC9 expression profiles with other clinically pertinent markers holds significant potential as a comprehensive prognostic tool, applicable in a variety of contexts including ICIs alone, in combination with other therapies, or across different treatment lines. Additionally, it is crucial to recognize the mounting body of evidence highlighting the oncogenic potential of HOXC9, positioning it as a key target for tailored therapeutic strategies within LUAD immunotherapy. Further studies are warranted to validate our findings and improve our understanding of the oncogenic mechanisms of HOXC9. The ethics approval (number K24-217Y) covers both animal and human experiments. Informed consent was obtained from all subjects involved in the study.
Figure 1.
Figure 1. Analyses of HOXC9 expression and its diagnostic and prognostic value in LUAD. (A) The expression distribution of HOXC9 in tumor tissues and normal tissues with TIMER2, respectively; (B-E) The correlation between HOXC9 expression and clinical variables of LUAD; (F, G) HOXC9 protein levels in adjacent healthy tissue and in tumor tissue; (H) The diagnostic value of HOXC9 in LUAD; (I) The OS analysis of HOXC9 in LUAD patients. (*, P<0.05; **, P<0.01; ***, P<0.001).
Figure 3.
Figure 3. Confirmation of Hoxc9 knockdown and its impact on the viability and proliferation of CMT167 and LLC cells. (A, B) The knockdown level of Hoxc9 detected by RT-qPCR. (C, D) The knockdown level of Hoxc9 detected by western blot. (E, F) Cell viability was significantly impaired under Hoxc9 knockdown. (G-I) Proliferation competence assessed by colony formation after 10 days. (*P<0.05, **P<0.01, ***P<0.001).
Figure 5.
Figure 5. Hoxc9 knockdown inhibited the migration and invasion of tumor cells. (A) The remaining wound length of CMT167 cells under Hoxc9 knockdown was significantly longer compared with control (200 μm). (B) The remaining wound length of LLC cells under Hoxc9 knockdown was significantly longer compared with control (200 μm). (C, D) Statistical analysis of the above assays. (E-G) Counts of migrated CMT167 and LLC cells were significantly decreased after Hoxc9 knockdown (100 μm). (*P<0.05, **P<0.01, ***P<0.001).
Figure 6.
Figure 6. Overexpression of Hoxc9 elicited a promotive role in tumor cell progression. (A) Confirmation of Hoxc9 overexpression in the two cell lines by WB. (B, C) Significantly higher cell viability was observed in the DDK-Hoxc9 groups of CMT167 and LLC cells. (D-F) Overexpression of Hoxc9 resulted in more colonies and a wider remaining wound (200 μm). (G) Overexpression of Hoxc9 resulted in more invaded cells (100 μm). (*P<0.05, **P<0.01, ***P<0.001).
Figure 8.
Figure 8. The predictive value of HOXC9 and TME markers for LUAD patients treated with PD-1 blockade plus chemotherapy. (A) Diagram of clinical sample selection and analysis. (B) IHC analysis of 5 proteins based on samples from 4 patients with different levels of HOXC9. (C) The distributions of responders under the 5 TME markers. (D) Analysis of the predictive value of combinations of HOXC9 and other markers. (E) PFS and OS analyses based on different marker statuses. (F) PFS and OS analyses based on different marker combinations. (*P<0.05, **P<0.01, ***P<0.001).
The study was supported by the Three Years Action to Accelerate the Development of Traditional Chinese Medicine Plan (ZY[2018-2020]-FWTX-3004), Start-up Fund for Talent Introduction of Shanghai Pulmonary Hospital (20180101), 2022 Development Fund of Discipline-Department of Radiotherapy of Shanghai Pulmonary Hospital, 2023 Development Fund of Discipline-Department of Radiotherapy of Shanghai Pulmonary Hospital, Science and Technology Innovation Action Plan of the Science and Technology Commission of Shanghai Municipality (23Y11908700 and 19411950300), Technical standards project of the Science and Technology Commission of Shanghai Municipality (21DZ2201900), and Hospital-level Key Project of Shanghai Pulmonary Hospital (FKLY20006), and the Fundamental Research Funds for the Central Universities (22120220615). | 2024-03-07T16:04:46.807Z | 2024-03-05T00:00:00.000 | {
"year": 2024,
"sha1": "874b8616a6414df2e6e410838f7f47799f8b2803",
"oa_license": "CCBY",
"oa_url": "https://www.aging-us.com/article/205637/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f38b0a01088c2e094a75daa3e25eb5c7d419ba21",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
248378284 | pes2o/s2orc | v3-fos-license | Thermoheliox: effect on the functional hemodynamics of the human brain
A kinetic study of the effect of thermoheliox (inhalation of a helium and oxygen mixture, 70 °C) on the functional hemodynamics of the human brain by functional magnetic resonance imaging was carried out. The dynamic responses of the BOLD signal were found to be biphasic. An empirical equation describing the first phase of the hemodynamic response to visual stimulus was proposed. It was shown that preliminary inhalation of thermoheliox stimulates the hemodynamic responses by slowing down the vasoconstriction.
The functional hemodynamics of the brain is a key process in the human central nervous system. In response to a signal, a local impulsive increase in the concentrations of oxygen and glucose (the main energy substrate of nerve cells), mediated by a specific behavior of the brain microvessels (neurovascular coupling), takes place in the excitation region of the neuron system. The impulse duration is approximately 10 s. This process is a highly important electromechanical feature of the brain as a biocomputer, which forms the basis for energy processes of receptor sensing, memory, thinking, and neurophysiological responses.
The unique opportunities for studying the neurovascular coupling are provided by functional magnetic resonance imaging (fMRI) based on the recording of BOLD responses (BOLD is blood-oxygen-level-dependent), the superparamagnetic characteristics of oxygenated hemoglobin. 1-6 Our studies of the detailed mechanism of neurovascular coupling are based on experimental investigation of the process dynamics, a chemical kinetic approach, and analysis of kinetic models. 7 An attractive and potentially efficient method to influence the efficiency of oxygen transfer to nervous tissues during excitation is the use of thermoheliox, a breathing mixture consisting of oxygen and helium, at 50-100 °C.
Thermoheliox as a new medical technique is used for the therapy of respiratory diseases, ischemic strokes, dysfunctions of pregnancy, etc. It should be emphasized that high-temperature thermoheliox is effective for the treatment of coronavirus infection. [8-11] This communication presents a quantitative study of the effect of thermoheliox on the dynamics of the hemodynamic response of the cortex excitation region after a visual signal. The study included three volunteers (two males and one female). As experimental results, we obtained 30 dynamic sets of BOLD signals before and after inhalation of thermoheliox (21% oxygen, 79% helium, temperature of 70 °C) for 0.5 h.
Experimental
Thermoheliox (a mixture of helium (60-80%) and oxygen (20-40%) heated up to 100 °C) was clinically tested and approved in various fields of modern medicine. The study was performed using the Heliox-Extreme apparatus. 12 The device is equipped with a set of sensing measuring devices and algorithms tailored to particular pathologies.
Functional magnetic resonance imaging data were obtained on a Philips Achieva dStream magnetic resonance scanner with a constant magnetic field strength of 3.0 T. An echo-planar pulse sequence (EPI) with the following parameters was used: repetition time (TR) of 3000 ms, echo time (TE) of 30 ms, EPI factor of 240, number of slices of 40-50 (depending on the size of the head of the test subject), slice thickness of 3 mm, number of acquisitions (NSA) of 1, time of one dynamic of 3 s, number of dynamics of 120.
The visual stimulation consisted in presenting a test subject with 15 blocks of alternating rest phases (the subject looks at a black display for 21 s) and visual stimulus phases (the subject looks at a chess board image flashing at a frequency of 4 Hz for 3 s). The stimuli were presented using a special attachment, and the start of the visual stimulation paradigm was synchronized with the start of fMRI scanning. For each subject, three fMRI scans were run successively before and after thermoheliox inhalation.
The BOLD response maps were obtained and processed using the SPM12 program. 13 Comparison of the maps showed a statistically significant contrast increase in the visual cortex in all subjects, but no significant response to visual stimulation in other brain loci. For each subject, an individual zone of activated visual cortex was identified by multiplying all his/her BOLD response maps. In these zones, the data were averaged over 15 dynamics and the error of the mean was determined. This gave an individual time dependence of the relative intensity of the BOLD signal for each subject: the BOLD values at time t were normalized to the value at t = 0 (the time of the start of visual stimulus presentation).
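The per-subject averaging and normalization can be sketched as follows (not the authors' code; variable names are hypothetical, and one block is assumed to span 8 dynamics, since 21 s rest + 3 s stimulus = 24 s at 3 s per dynamic).

import numpy as np

def mean_bold_response(series, onsets, n_dynamics=8):
    # `series`: 1-D BOLD time course in the activated zone (one value per
    # dynamic); `onsets`: indices of the 15 stimulus starts. Each segment is
    # normalized to its value at t = 0 before averaging across blocks.
    segments = np.array([series[i:i + n_dynamics] / series[i] for i in onsets])
    sem = segments.std(axis=0, ddof=1) / np.sqrt(len(segments))
    return segments.mean(axis=0), sem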
The statistical data were processed using the GraphPad Prism software. 14 Several specific features of BOLD signal dynamics were detected and studied.
Results and Discussion
Biphasic character of the functional hemodynamic response. The response of the excitation region to a visual signal has a complex pattern and includes, at least, two dynamic phases. Typical dependences of BOLD responses to short visual stimuli are depicted in Fig. 1.
It can be seen that the BOLD signal includes two response waves (two phases) differing in intensity. The induction period (2 s) is followed by an intense main hemodynamic surge (maximized at t = 6 s), followed by decay and a secondary, much weaker response (maximized at t = 15-18 s). The complex nature of the hemodynamic response is attributable to the multipathway nature of the coupling of the nerve impulse and the vascular response. Note that the biphasic nature of the functional hemodynamic response was predicted from the kinetic modeling of the process. 15

Empirical equation describing the first phase of the functional hemodynamic response. We proposed an empirical equation adequately describing the first phase of the hemodynamic response:

BOLD(t) = f(t) = 1 + At^n exp(-kt), (1)

where the induction period and the growth dynamics of the BOLD signal (vasodilation process) are reflected by the function At^n; the decay dynamics of the effect (vasoconstriction process) is described by the exponential function exp(-kt); the parameter n can correspond to the number of intermediate stages preceding accumulation of the vasodilator intermediate. A characteristic feature of function (1) is that it has a maximum. In addition,

t_max = n/k, (2)

where t_max is the time it takes to reach the maximum.

Thermoheliox stimulation of the hemodynamic response by slowing down the vasoconstriction. As can be seen from Fig. 1, preliminary inhalation of thermoheliox stimulates the BOLD signal. Using empirical equation (1), it is possible to identify the stage (vasodilation or vasoconstriction) that is affected by the preliminary thermoheliox inhalation. It follows from Eq. (1) that

ln{[f(t)_0 - 1]/[f(t)_He - 1]} = ln(A_0/A_He) + (k_He - k_0)t, (3)

where f(t)_0 is the function BOLD(t) before thermoheliox inhalation, and A_0 and k_0 are the characteristics before the inhalation; f(t)_He, A_He and k_He are the function BOLD(t) and the parameters of the hemodynamic process after inhalation of thermoheliox heated to 70 °C for 30 min.
Experimental data on the hemodynamic response kinetics of the first phase of the BOLD signal in the coordinates of Eq. (3) are shown in Fig. 3. The straight line in Fig. 3(b) has a negative slope; hence, k_He < k_0. This means that preliminary inhalation of thermoheliox slows down the vasoconstriction (relaxation) of the vascular dilation induced by the nervous impulse in the excitation region.
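Equation (1) can be fitted to a measured first-phase response with standard nonlinear least squares; a minimal sketch follows (not the authors' code; the initial guess p0 is an arbitrary assumption).

import numpy as np
from scipy.optimize import curve_fit

def bold_model(t, A, n, k):
    return 1.0 + A * t ** n * np.exp(-k * t)   # equation (1), t >= 0

def fit_first_phase(t, bold):
    popt, _ = curve_fit(bold_model, t, bold, p0=(0.01, 2.0, 0.5), maxfev=10000)
    A, n, k = popt
    return A, n, k, n / k                      # n/k = t_max, equation (2)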
The conducted experimental study of the BOLD signal dynamics revealed the biphasic character of the process. It was shown that the preliminary inhalation of thermoheliox extends the hemodynamic impulse by slowing down the vasoconstriction. | 2022-04-26T13:21:31.605Z | 2022-03-01T00:00:00.000 | {
"year": 2022,
"sha1": "210f22e8740f95e31539d3f5306d2bc77e950d61",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "ae6fcd4c4eff55e56b3bbf4ae08664ce57814b59",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": []
} |
13988573 | pes2o/s2orc | v3-fos-license | Coherent manipulation of electronic states in a double quantum dot
We investigate coherent time-evolution of charge states (pseudo-spin qubit) in a semiconductor double quantum dot. This fully-tunable qubit is manipulated with a high-speed voltage pulse that controls the energy and decoherence of the system. Coherent oscillations of the qubit are observed for several combinations of many-body ground and excited states of the quantum dots. Possible decoherence mechanisms in the present device are also discussed.
Initiated by various experiments on atomic systems, studies on coherent dynamics have been extended to small-scale quantum computers [1]. Nano-fabrication technology now allows us to design artificial atoms (quantum dots) and molecules (coupled quantum dots), in which atomic (molecular)-like electronic states can be controlled with external voltages [2,3,4]. Coherent manipulation of the electronic system in quantum dots and a clear understanding of decoherence in practical structures are crucial for future applications of quantum nanostructures to quantum information technology.
In this Letter, we describe the coherent manipulation of charge states, in which an excess electron occupies the left dot or the right dot of a double quantum dot (DQD). The coherent oscillations between the two charge states are produced by applying a rectangular voltage pulse to an electrode. Although this scheme is analogous to experiments on a superconducting island [5], our qubit is effectively isolated from the electrodes during the manipulation, while it is influenced by strong decoherence during the initialization due to the coupling with the electrodes. This controlled decoherence provides an efficient initialization scheme.
We consider a DQD consisting of left and right dots connected through an interdot tunneling barrier. The left (right) dot is weakly coupled to the source (drain) electrode via a tunneling barrier [see Fig. 1(a)]. The conductance through the device is strongly influenced by the onsite and interdot Coulomb interactions [6]. In the weak-coupling regime at a small source-drain voltage, V_sd, a finite current is only observed at the triple points, where tunneling processes through the three tunneling barriers are allowed. Under an appropriate condition where only the interdot tunneling is allowed, Coulomb interactions effectively isolate the DQD from the source and drain electrodes. In this case, we can consider two charge states, in which an excess electron occupies the left dot (|L⟩) or the right dot (|R⟩) with electrochemical potentials E_L and E_R, respectively. In practice, each charge state involves (many-body) ground and excited states. When the two specific states are energetically close to each other and the excitation to other states can be neglected, the system can be approximated as a two-level system (qubit). It is characterized by the energy offset, ε ≡ E_R − E_L, and the interdot tunneling, which gives an anti-crossing energy, Δ [3]. The effective Hamiltonian is
H = (ε/2)σ_z + (Δ/2)σ_x, (1)
where σ_x and σ_z are the Pauli matrices for the pseudo-spin bases |L⟩ and |R⟩. When E_L and E_R of the localized states are crossed by changing V_sd, for instance, as shown by dashed lines in Fig. 1(b), the eigenenergies, E_b and E_a, for bonding and anti-bonding states respectively, show anti-crossing as shown by solid lines. The coherent oscillation of the system is expected with the angular frequency given by Ω = √(ε² + Δ²)/ħ. The DQDs (samples I and II with almost identical dimensions) used for this work are defined in a GaAs/AlGaAs heterostructure containing a two-dimensional electron gas, as shown in Fig. 1(a). The experiments were performed in a magnetic field of 0.5 T at lattice temperature T_lat ≲ 20 mK, unless otherwise noted. The effective electron temperature, however, remained at T_elec ∼ 100 mK. Each dot in both samples contains about 25 electrons and has an on-site charging energy E_c ∼ 1.3 meV. The interdot electrostatic coupling energy is U ∼ 200 µeV. Figure 2(a) shows the current spectrum I of sample I when the voltage, V_R, on the right gate [G_R in Fig. 1(a)] is swept at a large source-drain voltage V_sd = 650 µV. Each dot contains several energy states in the transport window of width eV_sd, and resonant tunneling between them is clearly resolved as current peaks, two of which (resonances α and β) are shown in Fig. 2(a). Resonance α (β) is probably associated with the ground state of the left dot and the first (second) excited state of the right dot. In the vicinity of each peak, a two-level system (qubit) can be defined by taking into account only a single discrete state in each dot, |L⟩ and |R⟩. The qubit parameters, ε and Δ, and the tunneling rates, Γ_L and Γ_R, for the left and right barriers respectively, can be controlled independently by external gate voltages, and are determined from the elastic current spectra [3,4,6]. In order to manipulate the qubit, a rectangular voltage pulse is applied to the drain electrode. This switches the source-drain bias voltage V_sd between V_p = 650 µV, at which tunneling between the DQD and the electrodes is allowed, and zero, at which the DQD is effectively isolated from the electrodes due to Coulomb interactions.
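For readers who want to reproduce the basic energetics, the sketch below evaluates the eigenenergies of Eq. (1) and the oscillation frequency Ω = √(ε² + Δ²)/ħ. The choice Δ = 9 µeV corresponds to resonance α; all other numerical choices are illustrative.

```python
import numpy as np

HBAR_UEV_S = 6.582e-10        # hbar in ueV*s
DELTA = 9.0                   # anti-crossing energy Delta for resonance alpha (ueV)

eps = np.linspace(-40.0, 40.0, 401)   # energy offset eps = E_R - E_L (ueV)

# Eigenenergies of H = (eps/2)*sigma_z + (Delta/2)*sigma_x: anti-crossing of width Delta.
E_bonding = -0.5 * np.sqrt(eps**2 + DELTA**2)
E_antibonding = +0.5 * np.sqrt(eps**2 + DELTA**2)

# Oscillation frequency Omega = sqrt(eps^2 + Delta^2)/hbar, expressed as Omega/2pi (GHz).
f_osc_ghz = np.sqrt(eps**2 + DELTA**2) / HBAR_UEV_S / (2 * np.pi) / 1e9
print(f"Omega/2pi at eps = 0: {f_osc_ghz[200]:.2f} GHz")  # ~2.2 GHz, near the ~2.3 GHz quoted
```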
At the same time, due to the electrostatic coupling between the QDs and electrodes, the pulse also switches the energy offset between ε = ε_0 at V_sd = V_p and ε_1 at V_sd = 0 (ε_1 − ε_0 ∼ 30 µeV), as shown for ε_1 = 0 in Fig. 1(b). We designed the pulse sequence for initialization, coherent manipulation, and measurement in the following way.
For initialization, a relatively large source-drain voltage, V_sd = V_p, was applied under appropriate gate voltages, so that E_L and E_R are in between the electrochemical potentials of the source and drain electrodes, µ_S and µ_D (µ_S > E_L, E_R > µ_D = µ_S − eV_p). For example, in the off-resonance condition (ε = ε_0 ≲ −Δ) as shown in Fig. 1(c), electron-phonon interaction provides finite inelastic tunneling, whose rate is Γ_i, between the two states [4]. We adjusted Γ_L and Γ_R to make them sufficiently larger than Γ_i so that the current would be limited by the inelastic tunneling between the dots. This sequential tunneling process accumulates an excess electron in the left dot, providing the initial state |L⟩. Note that this initialization works even in the resonance condition (ε_0 = 0) when ħΓ_L and ħΓ_R are greater than Δ. Significant decoherence from the dissipative tunneling processes holds the system in the localized state |L⟩ rather than in the delocalized states.
For coherent manipulation, we non-adiabatically change V_sd to zero, which shifts the energy offset to ε = ε_1. A typical energy diagram for ε_1 ∼ 0 is shown in Fig. 1(d). In this case, the interdot electrostatic coupling prevents electron tunneling into and out of the DQD by any first-order tunneling process, and negligible current flows through the DQD. Hence the system is well approximated by Eq. (1). The system prepared in |L⟩ goes back and forth between |L⟩ and |R⟩ coherently. We maintain V_sd = 0 for the pulse length, t_p = 80–2000 ps, during which the oscillation continues.
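A minimal simulation of this manipulation stage follows from Eq. (1) under the assumptions of an ideal rectangular pulse and no decoherence: starting from |L⟩, the probability of |R⟩ oscillates as P_R(t) = [Δ²/(ε² + Δ²)] sin²(Ωt/2). The sketch below evaluates this expression; the detuned value ε = 20 µeV is an arbitrary illustration.

```python
import numpy as np

HBAR_UEV_S = 6.582e-10   # hbar in ueV*s

def p_right(t_s, eps_uev, delta_uev):
    """Probability of finding the excess electron in |R>, starting from |L>,
    for an ideal rectangular pulse (no decoherence, instantaneous edges)."""
    omega = np.sqrt(eps_uev**2 + delta_uev**2) / HBAR_UEV_S      # rad/s
    visibility = delta_uev**2 / (eps_uev**2 + delta_uev**2)      # oscillation amplitude
    return visibility * np.sin(omega * t_s / 2.0) ** 2

t_p = np.linspace(0.0, 2000e-12, 2001)            # pulse length 0 - 2000 ps
p_on = p_right(t_p, eps_uev=0.0, delta_uev=9.0)    # resonant: full-amplitude oscillation
p_off = p_right(t_p, eps_uev=20.0, delta_uev=9.0)  # detuned: faster but weaker oscillation
print(f"max P_R on resonance: {p_on.max():.2f}; detuned: {p_off.max():.2f}")
```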
Then, the large bias voltage is restored for the measurement [Fig. 1(e)]. The large tunneling rates (ħΓ_L, ħΓ_R > Δ) effectively stop the coherent manipulation, and thereby provide a strong measurement. If the system ends up in |R⟩ after the manipulation, the electron tunnels out to the drain electrode and contributes to the pumping current. The system goes back to the initial state |L⟩ after waiting longer than Γ_L^−1 + Γ_R^−1. However, no pumping current is expected for |L⟩, which is already the initial state. Hence, this pumping current depends on the probability of finding the system in |R⟩.
In practice, we repeatedly applied many pulses with a repetition frequency f_rep = 100 MHz and measured the average dc current, I, which comprises the coherent pumping current and the inelastic current that flows during initialization. In order to improve the signal-to-noise ratio, we employed a lock-in amplifier technique to measure the pulse-induced current I_p by switching the pulse train on and off at a low modulation frequency of 100 Hz. We estimated the average number of pulse-induced tunneling electrons, n_p = I_p/(e f_rep).
A color plot of n_p as functions of V_R and t_p is shown in Fig. 2(b). Sweeping V_R mainly shifts E_R and changes the energies ε_0 and ε_1 simultaneously while keeping ε_1 − ε_0 almost constant. A clear oscillation pattern is observed in a wide range of V_R. Local maxima of the oscillation amplitude appeared for relatively long t_p at gate voltages indicated by long-dashed lines, where the two states must be resonant (ε_1 = 0) during manipulation. We confirmed that the oscillation patterns in Fig. 2(b) are attributed to resonances α (clear oscillation) and β (faint oscillation) from their V_p dependence. The energy offsets ε_0,i and ε_1,i for resonance i (α and β) are also shown in Fig. 2(b).
The oscillation pattern for resonance α shows that the amplitude and period decrease as ε_1,α moves away from ε_1,α = 0. The current amplitude is asymmetric about ε_1,α = 0, and the oscillation continues until ε_1,α ∼ 40 µeV. These features are qualitatively consistent with a calculation based on the time-dependent Schrödinger equation and Eq. (1) using a time-dependent ε(t) with a finite rise time (∼ 100 ps) of the pulse [5]. The strong oscillation in the range of ε_0,α < 0 < ε_1,α can be understood as an interference between coherent time-evolution at ε(t) ∼ 0 during the finite rise time of the pulse and that during the fall time of the pulse. It should be noted that clear oscillation is seen even at ε_0,α = 0 (indicated by a black dotted line), where the two localized states are resonant during the initialization but off-resonant during the manipulation. This feature is convincing evidence that there is strong decoherence during initialization. The density matrix calculation for our initialization condition gives the decoherence rate ħ(Γ_R + Γ_L)/2 ∼ 30 µeV [7], which is greater than Δ = 9 µeV for resonance α. However, the Coulomb blockade effect eliminates this decoherence during manipulation, as mentioned before. Therefore, we presume that the oscillation at ε_0,α = 0 is induced by the modulation of the decoherence rate. In contrast, the disappearance of the oscillation at ε_0,β = 0 (indicated by a white dotted line) for resonance β (Δ = 30 µeV) might arise from inefficient initialization that provides a statistical mixture of bonding and anti-bonding states.
The qubit state can be manipulated arbitrarily. Ideally, the quarter-period oscillation at ε_1 = 0 corresponds to the π/2 pulse that prepares a superposition state (1/√2)(|L⟩ + i|R⟩). Leaving a state at ε = ε_2 ≫ Δ for a specific time t_φ gives a phase shift ε_2 t_φ/ħ between |L⟩ and |R⟩. Therefore arbitrary states can be prepared by tailoring the pulse waveform ε(t) even at a constant Δ. The demonstration of phase-shift operations will be published elsewhere [8]. Figure 2(c) shows typical n_p(t_p) traces at ε_1 = 0 for resonances α and β. The oscillation can be fitted well by an exponentially damped cosine plus a linearly decreasing term,
n_p(t_p) = A − B exp(−t_p/T_2) cos(Ωt_p) − Γ_i t_p, (2)
except when t_p ≲ 100 ps (the rise time of the pulse). The last term comes from the fact that the inelastic tunneling current is blocked during the manipulation. Actually, Γ_i ∼ (6 ns)^−1 obtained for α from this fitting is consistent with the inelastic dc current, which should be eΓ_i in the absence of the pulse. The offset, A ∼ 0.6, and amplitude, B ∼ 0.3, of the oscillation for α are comparable to the ideal case (A = 0.5 and B = 1 at ε_1 = 0), although they are degraded by the finite rise time of the pulse and non-ideal initialization/measurement processes. The oscillation frequency Ω and the decoherence time T_2 can be obtained from the fitting (Ω/2π ∼ 2.3 GHz and T_2 ∼ 1 ns for resonance α at ε_1 = 0).
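The parameter extraction can be illustrated with a least-squares fit of the damped-cosine form above to a synthetic trace; the data, noise level and starting values below are invented for illustration, not measured values.

```python
import numpy as np
from scipy.optimize import curve_fit

def n_p_model(t, A, B, T2, Omega, Gamma_i):
    """Assumed fit function, cf. Eq. (2): damped cosine plus linear decrease (t in ns)."""
    return A - B * np.exp(-t / T2) * np.cos(Omega * t) - Gamma_i * t

rng = np.random.default_rng(1)
t = np.linspace(0.1, 2.0, 60)                          # pulse lengths in ns
y = n_p_model(t, 0.6, 0.3, 1.0, 2*np.pi*2.3, 1/6) + rng.normal(0.0, 0.01, t.size)

popt, _ = curve_fit(n_p_model, t, y, p0=[0.5, 0.3, 1.0, 2*np.pi*2.0, 0.1])
A, B, T2, Omega, Gamma_i = popt
print(f"Omega/2pi = {Omega/(2*np.pi):.2f} GHz, T2 = {T2:.2f} ns, "
      f"Gamma_i ~ ({1/Gamma_i:.1f} ns)^-1")
```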
We estimate how the decoherence rate T_2^−1 depends on the energy offset ε_1 [Fig. 3(a)], the coupling energy Δ [Fig. 3(b)], and the lattice temperature T_lat [Fig. 3(c)]. Here Δ, which is determined from the fitting (ħΩ at ε_1 = 0), can be varied by changing the voltage V_C on the central gate G_C, as shown in Fig. 2(d). Although decoherence from first-order tunneling processes is eliminated during manipulation, other decoherence sources are significant in our measurement.
Firstly, background charge fluctuations and noise in the gate voltages affect ε and Δ, which change the oscillation frequency Ω and dephase the system [9,10]. The fluctuation, ε̃, of ε in our sample ranges between 1.6 µeV, which is estimated from the fluctuation of I in a frequency range 0.1–5 Hz, and 3 µeV, which is the narrowest linewidth of the resonant peak we obtained in sample I in the weak-coupling limit (Δ < 1 µeV) [4]. The corresponding decoherence rate, Γ_ε̃ = |dΩ/dε|ε̃ to the lowest order, for ε̃ = 1.6 µeV is shown by a solid line in Fig. 3(a). This qualitatively explains the large decoherence rate at ε_1 ≠ 0, where the system is sensitive to ε̃. However, the decoherence rate at ε_1 = 0 cannot be explained with this model, and should be dominated by other mechanisms.
Secondly, we consider cotunneling effects. Although the first-order tunneling processes are prohibited during manipulation, higher-order tunneling (cotunneling) processes can occur because relatively high Γ_L and Γ_R were chosen for efficient initialization. For simplicity, we only estimate one of the cotunneling processes, which scatters the electron from the anti-bonding state to the bonding state (eigenstates of the qubit), from second-order Fermi's golden rule. This gives a transition rate Γ_cot = (8/ħ)Δ(ħΓ)²/U² at ε_1 = 0, V_sd = 0 and zero temperature when the barrier is symmetric (Γ = Γ_L = Γ_R) [11]. Γ_cot, shown by solid lines in Figs. 3(b) and 3(c), actually includes thermal broadening in the source and drain [T_elec = 100 mK is assumed in Fig. 3(b)]. Although we cannot determine the parameters precisely, Γ_cot is comparable to the observed T_2^−1. We believe that the cotunneling effect is significant in our measurement but can be easily diminished by choosing a smaller Γ and by making the interdot electrostatic coupling energy U larger.
Lastly, we discuss electron-phonon interactions, which are an intrinsic decoherence mechanism in semiconductor QDs [4,12]. Spontaneous acoustic phonon emission remains even at zero temperature and causes the inelastic tunneling between the two states [4]. The phonon emission rate estimated from the inelastic current or from the fitting with Eq. (2) is Γ_i ∼ (4–20 ns)^−1, depending on Δ, at ε = −30 µeV. The Γ_i of interest at ε = 0 should be faster because of the spatial overlap of the eigenstates, and may be comparable to the observed T_2^−1. By assuming an Ohmic spectral density for simplicity, the spin-boson model predicts the decoherence rate Γ_sb = (π/4)gΔ coth(Δ/2k_B T_lat) for ε = 0, where g = 0.03 is the dimensionless coupling constant that was chosen to fit the temperature dependence data [see dashed line in Fig. 3(c)] [13,14]. This g is a reasonable value to explain the inelastic current [13], and thus phonon emission seems to be significant in our system. Therefore, the qubit is strongly influenced by low-frequency fluctuations when |ε| ≳ Δ, by cotunneling at high tunneling rates, and by acoustic phonons at high temperature. The resonances α and β actually involve excited states in the right dot, and the relaxation to the ground state should also cause decoherence. Other mechanisms, such as the fluctuation of Δ and the electromagnetic environment, may have to be considered to fully understand the decoherence. It should be noted that the quality of the coherent oscillation was improved by reducing high-frequency noise from the gate voltages and the coaxial cable. The remaining noise may also contribute to the decoherence. We hope that some decoherence effects can be reduced by further studies.
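To give a feel for the magnitudes quoted above, the sketch below evaluates Γ_cot and the spin-boson rate with the stated parameters (Δ = 9 µeV, ħΓ ∼ 30 µeV, U ∼ 200 µeV, g = 0.03). Dividing the spin-boson energy expression by ħ to obtain a rate is our reading of the formula, not something stated explicitly in the text.

```python
import numpy as np

HBAR_UEV_S = 6.582e-10   # hbar (ueV*s)
KB_UEV_PER_K = 86.17     # Boltzmann constant (ueV/K)

def gamma_cot(delta_uev, hbar_gamma_uev, u_uev):
    """Cotunneling rate at eps_1 = 0: Gamma_cot = (8/hbar) Delta (hbar*Gamma)^2 / U^2."""
    return 8.0 * delta_uev * hbar_gamma_uev**2 / (u_uev**2 * HBAR_UEV_S)

def gamma_sb(delta_uev, t_lat_k, g=0.03):
    """Spin-boson dephasing rate at eps = 0 (Ohmic bath), read as
    Gamma_sb = (pi/4) g (Delta/hbar) coth(Delta / 2 kB T_lat)."""
    x = delta_uev / (2.0 * KB_UEV_PER_K * t_lat_k)
    return (np.pi / 4.0) * g * (delta_uev / HBAR_UEV_S) / np.tanh(x)

# Resonance-alpha-like numbers: Delta = 9 ueV, hbar*Gamma ~ 30 ueV, U ~ 200 ueV.
print(f"Gamma_cot ~ ({1e9 / gamma_cot(9.0, 30.0, 200.0):.1f} ns)^-1")
print(f"Gamma_sb(100 mK) ~ ({1e9 / gamma_sb(9.0, 0.1):.1f} ns)^-1")
```

Both estimates come out within an order of magnitude of the observed T_2 ∼ 1 ns, consistent with the discussion above.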
In summary, we have successfully manipulated the artificial qubit in a double quantum dot. Coherent oscillations are observed for several combinations of ground and excited states. In the present experiments, there was no condition where two kinds of oscillations coincided, indicating that the two-level system is still a good approximation. However, application of a two-step voltage pulse, which consecutively adjusts the system at two resonances (α and β for instance) in a short time, would mix three quantum states coherently. Therefore, DQDs are promising for studying multi-level coherency [15], and the experiments can be extended to electron-spin manipulations and two-qubit operations [16].
We thank T. Brandes, T. Itakura, Y. Nakamura, K. Takashina, Y. Tokura, and W. G. van der Wiel for their stimulating discussions and help. | 2017-04-05T07:21:22.064Z | 2003-08-19T00:00:00.000 | {
"year": 2003,
"sha1": "d7167266fd84f4d436561e2806838c3a60506cbd",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/0308362",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "829d3c9ea303629faa39770af8270d151910a89b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
231701661 | pes2o/s2orc | v3-fos-license | Lack of viable severe acute respiratory coronavirus virus 2 (SARS-CoV-2) among PCR-positive air samples from hospital rooms and community isolation facilities
Background: Understanding the extent of aerosol-based transmission of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is important for tailoring interventions for control of the coronavirus disease 2019 (COVID-19) pandemic. Multiple studies have reported the detection of SARS-CoV-2 nucleic acid in air samples, but only one study has successfully recovered viable virus, although it is limited by its small sample size. Objective: We aimed to determine the extent of shedding of viable SARS-CoV-2 in respiratory aerosols from COVID-19 patients. Methods: In this observational air sampling study, air samples from airborne-infection isolation rooms (AIIRs) and a community isolation facility (CIF) housing COVID-19 patients were collected using a water vapor condensation method into liquid collection media. Samples were tested for presence of SARS-CoV-2 nucleic acid using quantitative real-time polymerase chain reaction (qRT-PCR), and qRT-PCR-positive samples were tested for viability using viral culture. Results: Samples from 6 (50%) of the 12 sampling cycles in hospital rooms were positive for SARS-CoV-2 RNA, including aerosols ranging from <1 µm to >4 µm in diameter. Of 9 samples from the CIF, 1 was positive via qRT-PCR. Viral RNA concentrations ranged from 179 to 2,738 ORF1ab gene copies per cubic meter of air. Virus cultures were negative after 4 blind passages. Conclusion: Although SARS-CoV-2 is readily captured in aerosols, virus culture remains challenging despite optimized sampling methodologies to preserve virus viability. Further studies on aerosol-based transmission and control of SARS-CoV-2 are needed.
Aerosol-based transmission of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) and its overall contribution to the coronavirus disease 2019 (COVID-19) pandemic has been a subject of intense debate. 1,2 Aerosol-based transmission is defined as transmission through inhalation of particles dispersed through the air as aerosols. 3-8 Among the 3 air sampling studies where viral culture was attempted, 1 study using a water-vapor condensation collection method resulted in the successful isolation of infectious SARS-CoV-2. 8 These infectious aerosols were collected in a hospital room occupied by 2 confirmed COVID-19 patients, and the viral genome sequences matched the sequence in a respiratory sample of 1 of the patients in the room. 8 Although these data strongly support aerosol-based transmission of SARS-CoV-2, further studies are needed to demonstrate the reproducibility of these results across a variety of settings and patients.
We have previously described results of positive air samples in 2 hospital rooms housing COVID-19 patients early in the course of illness, though viral culture was not performed in that pilot study. 5 Hence, as a follow-up to that study, we validated and adapted a water-vapor condensation collection method similar to that reported by Lednicky et al 8 to collect air samples from hospital rooms and community isolation facilities housing COVID-19 patients, and we evaluated quantitative real-time polymerase chain reaction (qRT-PCR) positive air samples for the presence of infectious SARS-CoV-2 through viral culture. We hypothesized that infectious SARS-CoV-2 could be isolated from air samples obtained from rooms of patients early in their illness, when viral shedding from the respiratory tract tends to peak. 9,10
Study setting
This study was conducted in airborne-infection isolation rooms (AIIRs) at the National Centre of Infectious Diseases, Singapore, as well as a community isolation facility (CIF) housing confirmed COVID-19 patients not requiring inpatient care. The AIIRs were completely enclosed negative-pressure rooms with 12 air changes per hour, and they housed either 1 or 2 COVID-19 patients each. Airflow direction was from the ceiling towards air vents located behind the patients' beds, just above the ground. The size of these rooms was ~8 m by ~8 m by ~2.5 m, with a total air volume of ~160,000 L. Rooms housing patients within the first week of illness were preferentially selected because this is when viral shedding and infectivity are highest, and all patients were confirmed to have COVID-19 via SARS-CoV-2 PCR at the hospital laboratory. Clinical data were collected from medical records using a standardized data collection form.
The CIF was a large, naturally ventilated facility with a capacity to house 2,700 patients, which used to function as an exhibition center. The facility was not enclosed, with one side facing the sea, providing thorough ventilation. In accordance with Singapore public health policy, COVID-19 patients who did not require further inpatient care were transferred to a community isolation facility for continued isolation until at least 21 days after illness onset. 11 This facility was divided into cubicles housing 10 patients each, separated from other cubicles by surrounding temporary walls, without ceilings or windows. Airflow direction was variable given the natural ventilation and was not measured. Clinical data could not be collected from the patients at this facility.
Air sampling
Air samples were collected using a BioSpot-VIVAS BSS300-P bioaerosol sampler (Aerosol Devices, Fort Collins, CO), which collects airborne particles using a water-vapor condensation method into a liquid collection medium at a flow rate of 8 L per minute. This method has been previously used to successfully isolate other respiratory viruses such as influenza. 12,13 A GK4.162 (RASCAL) cyclone (Mesa Laboratories, Butler, NJ) was affixed to the sampling inlet during selected sampling cycles to filter out particles >4.34 μm in diameter, selectively collecting only small particles <4.34 μm in size at the flow rate of 8 L per minute. The sampling inlet was placed at a height of 1 m and a distance of 1 m from the patients' beds in both the AIIRs and the CIF. The air sampling configurations illustrating relative distances between the samplers and the patients in both the hospital AIIR and the CIF are shown in Figures 1 and 2, respectively.
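Normalizing the qRT-PCR yield of a sampling cycle to the volume of air sampled follows directly from the flow rate. In the sketch below the cycle duration and the total copy number are hypothetical, chosen only to illustrate the arithmetic.

```python
FLOW_L_PER_MIN = 8.0   # BioSpot-VIVAS sampling flow rate stated above (L/min)

def copies_per_m3(total_copies, duration_min, flow_l_per_min=FLOW_L_PER_MIN):
    """Normalize total gene copies recovered from one sampling cycle to air volume."""
    sampled_air_l = flow_l_per_min * duration_min
    return total_copies / (sampled_air_l / 1000.0)   # 1 m^3 = 1000 L

# Hypothetical example: an 8-h cycle samples 3,840 L of air; if the collection
# medium yielded 3,757 ORF1ab copies in total, the concentration would be:
print(f"{copies_per_m3(3757, duration_min=480):.1f} copies/m^3")   # ~978.4
```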
To validate this sampling protocol, for the first 2 sampling cycles, 6 additional NIOSH BC-251 bioaerosol samplers (US National Institute for Occupational Safety and Health) connected to SKC AirChek TOUCH pumps (SKC, Eighty-Four, PA) were used (methodology as previously described 5 ) to ensure that results were concordant between the different sampling methods. Because results between both sampling methods were concordant for the first 2 sampling cycles, only the BioSpot sampler was used for subsequent sampling.
Quantitative real-time polymerase chain reaction methods
Air samples were tested for the presence of SARS-CoV-2 via qRT-PCR. Sample RNA extraction was conducted using the QIAamp viral RNA mini kit (Qiagen, Hilden, Germany) according to the manufacturer's instructions. Real-time PCR assays targeting the envelope (E) gene 14 and an orf1ab assay adapted from Drosten et al 15 were used for the detection of SARS-CoV-2 RNA. For the E gene assay, a 20 μL reaction mix was prepared with 12.5 μL SuperScript III Platinum One-Step qRT-PCR Kit (ThermoFisher Scientific, Waltham, MA, USA) buffer, 0.75 mM MgSO4, 5 μL RNA, and 400 nM each of forward primer (E_Sarbeco_F1-ACAGGTACGTTAATAGTTAATAGCGT) and reverse primer (E_Sarbeco_R2-ATATTGCAGCAGTACGCACACA), with 200 nM probe (E_Sarbeco_P1-(FAM)ACACTAGCCATCCTTACTGCGCTTCG(BHQ1)). Thermal cycling conditions included reverse transcription at 55°C for 10 minutes and an initial denaturation at 95°C for 5 minutes, followed by 45 cycles of 95°C for 15 seconds and 58°C for 1 minute. For the orf1ab assay, a 20 μL reaction mix was prepared with 12.5 μL SuperScript III Platinum One-Step qRT-PCR Kit buffer, 0.5 mM MgSO4, 5 μL RNA, and 800 nM each of the forward primer (Wu-BNI-F-CTAACATGTTTATCACCCGCG) and reverse primer (Wu-BNI-R-CTCTAGTAGCATGACACCCCTC), with 400 nM probe (WU-BNI-P-(FAM)TAAGACATGTACGTGCATGGATTGGCTT(BHQ1)). Thermal cycling conditions included reverse transcription at 55°C for 10 minutes and an initial denaturation at 95°C for 5 minutes, followed by 45 cycles of 95°C for 15 seconds and 60°C for 1 minute. All samples were run in duplicate and with both assays. Positive detection was recorded as long as amplification was observed in at least 1 assay.
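Absolute quantification of the kind reported in the Results (gene copies per reaction) is typically obtained by inverting a standard curve Ct = m·log10(copies) + b. The study text does not give the curve parameters, so the slope and intercept in this sketch are placeholders for illustration only.

```python
# Hypothetical standard-curve parameters; NOT values reported in this study.
M_SLOPE = -3.32      # slope corresponding to ~100% amplification efficiency (assumed)
B_INTERCEPT = 40.0   # Ct at one copy per reaction (assumed)

def copies_from_ct(ct):
    """Invert Ct = m*log10(copies) + b to estimate copies per reaction."""
    return 10 ** ((ct - B_INTERCEPT) / M_SLOPE)

print(f"{copies_from_ct(33.0):.0f} copies/reaction")   # ~128 with these placeholders
```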
Virus culture methods
PCR-positive aerosol samples collected by the BioSpot-VIVAS BSS300-P sampler were further evaluated for virus viability via cell culture. Monolayers of Vero C1008 cells (ATCC-1586) in T25 flasks were inoculated with 1 mL inoculum (500 μL of the swab sample and 500 μL Eagle's MEM) and cultured at 37°C, 5% CO2, with blind passage every 7 days. Thereafter, 140 μL of cell culture was used for RNA extraction and real-time PCR twice per week, to monitor changes in target SARS-CoV-2 genes as an indication of successful viral replication. In the absence of cytopathic effects and real-time PCR indication of viral replication, blind passages continued for a total of 4 passages before any sample was determined to be negative for viable SARS-CoV-2 virus particles.
Informed consent was waived as there was no direct interaction with the patients. Clinical data were collected as part of a separate retrospective cohort study of COVID-19 patients (National Healthcare Group Domain Specific Review Board, reference no. 2020/01122).
Setting and patient selection
In total, 12 sampling cycles were carried out in hospital AIIRs: 8 in rooms housing 2 patients and 4 in rooms housing 1 patient (Table 1). One room was sampled twice 48 hours apart; hence, the total number of unique patients involved was 19. Of these, 18 patients (94.7%) were male, and the median age was 43 years (interquartile range [IQR], 34-48). The median day of illness was day 5 (IQR, 4-7), and 12 patients (63.2%) were symptomatic on the day of sampling. None of the patients needed supplemental oxygen or underwent aerosol-generating procedures in the 24 hours preceding sampling, and none was critically ill, intubated, or on mechanical ventilation. Also, 9 sampling cycles were carried out in the CIF cubicles, with 10 patients in each cubicle at the time of sampling. In this patient group, day of illness could not be determined, but air sampling was performed within 7 days of a PCR-positive clinical swab finding. Formal clinical data collection from CIF patients was not permitted due to lack of time to obtain institutional review board approval for clinical data collection at this external site while COVID-19 patients were available for study.
Air samples
Of 12 BioSpot air samples from hospital AIIRs, 6 (50%) were positive for SARS-CoV-2 nucleic acid. Among these positive air samples, concentrations ranged from 178.9 to 2,738.4 virus copies per cubic meter of air (using the ORF1ab gene target for calculation, as this target was consistently detected across all positive samples). Of these, 4 samples were size fractionated to contain only particles <4.34 μm in diameter, while 2 were not size fractionated. All positive samples were from rooms with at least 1 symptomatic patient, and all patients were early in the illness course (within seven days). For the first 2 sampling cycles, NIOSH aerosol samplers were also used to collect aerosols in the rooms to validate the BioSpot sampling method (Fig. 1). Results were agreeable (Table 2), and SARS-CoV-2 nucleic acid was detected in aerosols <1 μm, 1-4 μm, and >4 μm in diameter.
Only 1 (11.1%) of 9 samples from the CIF was positive for SARS-CoV-2 nucleic acid, with a concentration of 978.3 ORF1ab gene copies per cubic meter of air. This was a non-size-fractionated sample. The other 4 size-fractionated samples and 4 non-size-fractionated samples were negative via qRT-PCR. Virus cultures of all 7 qRT-PCR-positive BioSpot air samples were negative after 4 blind passages.
Discussion
In this air sampling study conducted in hospital rooms and a community isolation facility, air samples collected from the environments of COVID-19 patients were frequently positive for SARS-CoV-2 nucleic acid via PCR, though PCR-positive air samples were negative on viral culture. Although SARS-CoV-2 nucleic acid has been frequently detected in air samples, viable virus isolation from the air has been reported by only 1 study. 8 Lednicky et al 8 described the isolation of viable virus from air samples collected from 1 hospital room housing 2 COVID-19 patients. Although these data support aerosol-based transmission, they are limited by their small sample size. More studies including larger sample sizes or longitudinal cohorts are needed to accurately measure the amount of viable virus in aerosols emitted by COVID-19 patients.
It is well understood that viral shedding from the respiratory tract of COVID-19 patients tends to peak early in the disease course, 9,10 and results from both our current study and our previous pilot study demonstrate our ability to capture aerosolized SARS-CoV-2 from nearby patients with known clinical cycle threshold (Ct) values below 21. 5 Taken together, it is plausible to estimate that the risk of SARS-CoV-2 infection through inhalation is lower when COVID-19 patients are later in their illness, have higher clinical Ct values, and SARS-CoV-2 in nearby aerosols is below the detection limit or is not present. But again, measurements of infectious virus emission rates across a variety of patients are needed to more accurately assess this risk.
Size fractionation of aerosols containing infectious virus is also an important component of measuring risk of infection, because the size of a virus-laden aerosol is indicative of where in the respiratory tract it can be deposited and the type of infection or immune response that might ensue. Aerosol size fractionation was not performed by Lednicky et al 8 ; thus, the amount of infectious SARS-CoV-2 carried in respirable aerosols (<5 μm in diameter) is unknown. We attempted to address this knowledge gap by performing aerosol size fractionation in our study, and although SARS-CoV-2 RNA was detected in respirable aerosols, virus cultures were negative. Notably, we detected SARS-CoV-2 RNA in aerosols <1 μm in diameter, which we failed to accomplish in our previous study. In vitro cell culture is considered the gold-standard method for determining virus infectivity, but technical limitations to this approach must be considered when studying environmental samples containing low virus concentrations compared to human clinical samples. Successfully isolating infectious virus from air samples is known to be challenging due to the degradation of viral material during the collection process. 16 To increase the probability of viable virus recovery from air samples collected in this study, we used a similar water-vapor condensation method and bioaerosol collector as described by Lednicky et al. 8 The BioSpot collection device used is designed to mimic the physiological conditions of the human lungs, to better preserve pathogen viability, which is often compromised when using dry cyclone air sampling devices. Although a significant proportion of air samples in our study were positive for SARS-CoV-2 via qRT-PCR, all virus cultures were negative despite optimization of the sampling methodology. However, the failure to isolate viable virus from the air does not necessarily mean that patients are not shedding infectious aerosols. Numerous factors may compromise successful virus isolation, such as the sample collection media, sample transfer, sample processing, and the in vitro cell culture infection method, which may be enhanced by using engineered cell lines. For example, Vero E6 cells expressing TMPRSS2 have been demonstrated to enhance SARS-CoV-2 isolation. 17 For viral culture in this study, we used standard Vero E6/C1008 cells. Such technical and virological caveats should be considered when interpreting air sampling data.
Our study has several further limitations. Although our sample size was larger than earlier pilot studies performed by our group and other authors, the generalizability of our findings is still limited for several reasons. First, we preferentially selected patients early in their illness course and with a lower Ct value because we hypothesized this would maximize the possibility of successfully isolating viable virus. Second, most of our patients had only mild disease, not requiring supplemental oxygen, and our results may have varied if we had sampled patients on high-flow oxygen or in the intensive care unit. Third, sampling was conducted in a naturally ventilated community isolation facility and in airborne-infection isolation hospital rooms (designed to limit transmission of airborne infections). Each of these settings has a different heating, ventilation, and air conditioning (HVAC) system compared to other community venues (eg, schools and bars), thus limiting the generalizability of our findings to transmission in the community, which is where the bulk of SARS-CoV-2 transmission occurs.
Superspreading events of SARS-CoV-2 have been described. 19,20 However, these events appear to be sporadic, and the secondary attack rates among household or close contacts in other transmission studies are reported to be lower than what would be expected for a virus classically classified as airborne, 21-23 indicating that SARS-CoV-2 transmission dynamics display significant heterogeneity. 24 Furthermore, large cohort studies have demonstrated that a minority of infectious individuals account for a disproportionate number of secondary cases. 25 Although this further complicates our collective understanding of how and by whom SARS-CoV-2 is readily transmitted, it can partly explain the sporadic nature of SARS-CoV-2 superspreading events.
Aerosol-based transmission of SARS-CoV-2 appears to be occurring, and knowledge gaps remain regarding its overall contribution to the COVID-19 pandemic. More information is needed on inhalation dose and patient factors that influence shedding of infectious aerosols. Follow-up air sampling studies with a larger sample size across a wider range of patients and healthcare and community settings are needed.
Acknowledgments. We thank the DSO National Laboratories environmental detection team and clinical diagnostics team for BSL3 sample processing and analysis, and the logistics and repository team for transport of biohazard material, inventory, and safekeeping of received items. We also thank Bill Lindsley from the Centers for Disease Control and Prevention (CDC) National Institute for Occupational Safety and Health (NIOSH) for loaning the NIOSH samplers used in this study, and Than The Son from Duke-NUS Medical School for NIOSH sampler preparation.
Financial support. This study was funded by the Naval Medical Research Center (NMRC) COVID-19 Research Fund (grant no. MOH-000469), the National Healthcare Group (NHG)-NCID COVID-19 Centre (grant no. COVID19 CG0002), the NHG Fund (grant no. QG20R/COTX), and internal funds from DSO National Laboratories. Oon-Tek Ng is supported by an NMRC Clinician Scientist Award (grant no. MOH-000276). Kalisvar Marimuthu is supported by an NMRC Clinician-Scientist Individual Research Grant (grant no. CIRG18Nov-0034). Additional funding support came from a private donation from Sumitomo Mitsui Banking Corporation to the National Centre for Infectious Diseases, which supplemented funding for this study. The funders had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and the decision to submit the manuscript for publication.
Conflicts of interest.
All authors report no conflicts of interest relevant to this article.
Table 1. Clinical Characteristics of COVID-19 Patients and Corresponding Air Sampling Results From Their Hospital Airborne-Infection Isolation Rooms. Note. ND, none detected; Ct, cycle threshold; NA, not available. (a) PCR cycle threshold value from patient's respiratory sample collected within 72 hours prior to room air sampling; PCR target using E or N2 gene. (b) ORF1ab gene copies. (c) NIOSH aerosol samples were collected from rooms 1 and 2 for BioSpot sampler validation; results were agreeable (see Table 2).
Table 2. NIOSH and BioSpot Aerosol Samples Collected From Double-Occupancy Airborne-Infection Isolation Rooms of COVID-19 Patients. Note. ND, none detected; PCR, polymerase chain reaction; Ct, cycle threshold; NIOSH, National Institute for Occupational Safety and Health. (a) PCR cycle threshold values from patient respiratory samples collected within 72 h of room sampling. (b) Six 840-L samples pooled and analyzed together (5,040 L air total); size fractionation retained. (c) One 3,840-L sample. | 2021-01-26T06:16:21.037Z | 2021-01-25T00:00:00.000 | {
"year": 2021,
"sha1": "31ca9f1ee255bab0500260b5b9255939b9837513",
"oa_license": "CCBY",
"oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/BABC764B2945B2CF2764992984464969/S0899823X21000088a.pdf/div-class-title-lack-of-viable-severe-acute-respiratory-coronavirus-virus-2-sars-cov-2-among-pcr-positive-air-samples-from-hospital-rooms-and-community-isolation-facilities-div.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "73e6f221690b0bff912d18d3a917405f0e5a6097",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
252876301 | pes2o/s2orc | v3-fos-license | Specific pattern of linguistic impairment in Parkinson’s disease patients with subjective cognitive decline and mild cognitive impairment predicts dementia
Abstract Objective: Parkinson's disease patients with subjective cognitive decline (PD-SCD) and mild cognitive impairment (PD-MCI) have an increased risk of dementia (PDD). Thus, the identification of early cognitive changes that can be useful predictors of PDD is a highly relevant challenge. Posterior cortically based functions, including linguistic processes, have been associated with PDD. However, investigations that have focused on linguistic functions in PD-MCI are scarce, and none of them includes PD-SCD patients. Our aim was to study language performance in PD-SCD and PD-MCI. Moreover, language subcomponents were considered as predictors of PDD. Method: Forty-six PD patients and twenty controls were evaluated with a neuropsychological protocol. Patients were classified as PD-SCD and PD-MCI. Language production and comprehension were assessed. Follow-up assessment was conducted at a mean of 7.5 years after the baseline. Results: PD-MCI patients showed a poor performance in naming (actions and nouns), action generation, anaphora resolution and sentence comprehension (with and without a center-embedded relative clause). PD-SCD showed a poor performance in action naming and action generation. Deficit in action naming was an independent risk factor for PDD during the follow-up. Moreover, the combination of deficit in action words and sentence comprehension without a center-embedded relative clause was associated with a greater risk. Conclusions: The results are of relevance because they suggest that a specific pattern of linguistic dysfunctions, which can be present even in the early stages of the disease, can predict future dementia, reinforcing the importance of advancing the knowledge of linguistic dysfunctions in predementia stages of PD.
Introduction
Parkinson's disease (PD) is the second most common neurodegenerative disease after Alzheimer's disease (Hirtz et al., 2007), and it is characterized by both motor and nonmotor symptoms. Mild cognitive impairment is common in nondemented PD patients (PD-MCI), affecting 30-50% depending on the progression of the disease (Galtier et al., 2016; Monastero et al., 2018). PD-MCI is considered a risk factor for the development of dementia (PDD), with a high conversion rate to PDD in the years following PD-MCI diagnosis (Galtier et al., 2016; Hoogland et al., 2017). More than 80% of PD patients will develop PDD after 20 years (Hely et al., 2008).
Subjective cognitive decline (SCD) is very common in the elderly and has gained attention as a predictor of future cognitive decline and AD dementia (Jessen et al., 2020). Patients or their caregivers are often the first to notice subtle changes in the patient's cognitive functioning, and the presence of this subjectively experienced cognitive decline may be one of the first signs of cognitive impairment. PD patients frequently report subjective cognitive complaints (Lehrner et al., 2014), but the number of investigations focused on PD-SCD is still limited and their clinical meaning is unclear. The results suggest that PD-SCD is a risk factor for developing PD-MCI (Erro et al., 2014; Hong et al., 2014) and PDD (Galtier et al., 2019). Thus, the early identification of minor cognitive changes in PD patients that can be useful predictors of PDD should be a high-priority objective for researchers and also for clinicians.
The language domain can be conceptualized as a set of complex behaviors involving several processes. The disorders in motor speech execution caused by impairment in tone, range of motion and coordination of the speech effectors are well described in PD patients (Smith & Caplan, 2018). Language production and comprehension have also been studied in PD, although they are less well known compared to other cognitive domains, and many of the results are difficult to interpret. This is partially explained by the diversity of tasks designed to evaluate linguistic functions. Language production, measured by word generation or naming tasks, is usually affected in PD patients, even in the early stages of the disease (Bocanegra et al., 2015). Moreover, a disadvantage in action naming (Bertella et al., 2002; Cotelli et al., 2007; Rodríguez-Ferreiro et al., 2009) and action generation (Crescentini et al., 2008; Péran et al., 2003) compared to nouns has been described. These results are consistent with recent evidence regarding brain functioning and the hypothesis that different categories of content may be represented in different regions of the brain depending on the sensory and motor processes involved in the acquisition of these contents (Auclair-Ouellet et al., 2017).
On the other hand, comprehension has been assessed in PD with a variety of sentences of diverse syntactic complexity, with special attention being paid to subordinate clauses. Several studies have reported that deficits in comprehension occur in highly complex sentences that include this type of clause and that performance is influenced by other cognitive processes such as attention, working memory and executive functions (Grossman, 1999; Grossman et al., 1992; Hochstadt, 2009; Hochstadt et al., 2006). However, other results have questioned these findings, reporting that comprehension deficits in nondemented PD patients also occur in less complex sentences, without a clear association with executive resources (Bocanegra et al., 2015; Skeel et al., 2001).
Despite the different investigations that have focused on the study of linguistic functions in PD patients, and the evidence of language impairment in PDD (Noe et al., 2004), the lack of studies focused on predementia stages of PD, that is, patients with PD-SCD or PD-MCI, is surprising. In studies based on the Movement Disorder Society (MDS) Task Force criteria for PD-MCI (Litvan et al., 2012), the language domain has not usually been explored (Pedersen et al., 2013, 2017; Weintraub et al., 2015) or assessment has been limited to standardized naming tasks (i.e. the Boston Naming Test) (Broeders et al., 2013; Domellöf et al., 2015; Marras et al., 2013; Pan et al., 2022; Pigott et al., 2015; Santangelo et al., 2015). Moreover, it is probable that a significant number of studies prior to the MDS criteria have included PD patients with MCI in groups of patients without cognitive impairment, complicating the interpretation of these results and their clinical value for the characterization of cognitive impairment in PD patients without dementia.
To date, only a few cross-sectional research works have focused on the study of linguistic functions in PD-MCI, and none of them includes PD patients with SCD. The scarce available results report word-finding difficulties in PD-MCI characterized by fewer words per minute and more pauses within utterances. Other authors showed that PD-MCI patients presented an altered performance in action and object naming, whereas PD patients without MCI exhibited a selective difficulty for action naming (Bocanegra et al., 2015). Moreover, patients with and without MCI exhibited comprehension difficulties in sentences with different levels of complexity (with and without a subordinate clause). Interestingly, differences between PD patients and controls in action naming and in comprehension of sentences without a subordinate clause remained after adjusting for executive functions. On the contrary, differences between groups in subordinate-clause sentence comprehension disappeared after executive function adjustment (Bocanegra et al., 2015).
To the best of the authors' knowledge, no previous study has focused on linguistic functions in the predementia stages of PD (SCD and MCI) using a long-term follow-up design. Thus, the overall objective here was to conduct a longitudinal study evaluating linguistic functions in a sample of PD patients with SCD and MCI. The aims of the present study were: (1) to investigate language performance in patients with PD-SCD and PD-MCI with a comprehensive battery of linguistic tests; and (2) to explore which of the language subcomponents at the baseline better predict the development of PDD after a mean follow-up of 7.5 years. The hypotheses are that the PD-MCI group, compared to the controls and PD-nSCD, will present more severe production and comprehension language difficulties, while the PD-SCD group will present mild language difficulties, primarily at the production level. Selective language disturbances will be useful predictors of dementia development.
Subjects
The study is part of a larger research project developed by the School of Psychology, University of La Laguna, in collaboration with the Department of Neurology, N.S. La Candelaria University Hospital and the Tenerife Parkinson Disease Association. The sample consisted of 66 participants: 46 patients with idiopathic PD, according to the clinical criteria for the diagnosis of PD (Hughes et al., 1992), and 20 healthy normal controls (HC). Patients were recruited consecutively by a neurologist specializing in movement disorders, in the regular neurology consulting department of the above hospital, and were evaluated in the "on" state, using the Hoehn & Yahr Scale (Hoehn & Yahr, 1967) and the Unified Parkinson's Disease Rating Scale (UPDRS; Fahn & Elton, 1987). The exclusion criteria were as follows: (a) dementia associated with PD or global cognitive deterioration defined by the Mini-Mental State Examination (MMSE) score <24 (Folstein et al., 1975); (b) history of major psychiatric disorder; (c) drug or alcohol abuse; (d) visual and/or auditory perception disorders limiting the ability to take the test; (e) history of stroke and/or head injury with loss of consciousness; and (f) deep brain stimulation surgery. Patients and controls were matched in age, education, gender, manual preference and estimated IQ (Information subtest) (Wechsler, 1997). The Beck Depression Inventory was administered for the assessment of mood state (Beck et al., 1961). All participants were informed about the aims of the investigation, participated voluntarily and gave their informed consent. The data were obtained in accordance with the regulations of the local ethics committee and in compliance with the Helsinki Declaration for Human Research. Demographic and clinical characteristics of PD patients and controls are shown in Table 1.
Diagnosis of PD-SCD, PD-MCI and dementia
The participants were evaluated with a neuropsychological protocol including two tests for each of the attention, executive, memory and visuospatial domains (see supplementary material). PD-SCD was established on the basis of a semi-structured interview, previously published by the authors (Galtier et al., 2019). The patients and care partners provided their subjective opinions regarding whether the patient had experienced changes in each of the following cognitive functions: attention, memory, language, visuoperceptual skills and executive functions. Regarding PD-MCI diagnosis, the criteria proposed by the MDS were applied (Litvan et al., 2012). Impairment in neuropsychological tests was defined as performance 1.5 standard deviations or more below the mean of the control group. The absence of significant functional decline was confirmed based on a semi-structured interview and the clinical impression of the subject's general cognitive function. The patients' follow-up assessments were conducted at a mean of 7.5 (median 7.4; interquartile range 6.83-8.00; absolute minimum-maximum 6.30-8.40) years after the baseline. A diagnosis of PDD was made on the basis of the MDS criteria. Decreased global cognitive functioning and deficits severe enough to impair daily life should be present, according to level 1 of the MDS criteria.
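The 1.5 SD criterion (and the 1.0 SD "altered" cutoff used later in the Results) can be expressed as a one-line rule. In the sketch below the control scores are hypothetical, and lower scores are assumed to indicate worse performance.

```python
import numpy as np

def is_impaired(score, control_scores, sd_cutoff=1.5):
    """MDS PD-MCI criterion as applied here: performance at least sd_cutoff standard
    deviations below the control-group mean (lower score = worse, by assumption)."""
    mu = np.mean(control_scores)
    sd = np.std(control_scores, ddof=1)
    return score <= mu - sd_cutoff * sd

# e.g., a hypothetical patient naming score of 31 against hypothetical controls:
controls = [38, 40, 36, 39, 41, 37, 40, 38]
print(is_impaired(31, controls))                  # True: meets the 1.5 SD criterion
print(is_impaired(37, controls, sd_cutoff=1.0))   # the "altered" 1.0 SD variant
```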
Linguistic functions assessment
Instruments to assess the linguistic domain were designed by the authors and presented by computer software. Language production was assessed by two tests. The naming task consisted of 60 visual stimuli: 40 items representing elements (noun naming test, NNT) and 20 items depicting action scenes (verb naming test, VNT). Nouns and actions were paired on variables known to affect naming: every action item was paired with two noun items in word frequency and nominal agreement (Alameda & Cuetos, 1995; Cuetos & Alija, 2003). The stimuli were line drawings in black and white (Cuetos et al., 1999; Druks & Masterson, 2000). Participants were instructed to name the concept represented, either the noun corresponding to the drawn element or the verb corresponding to the depicted action. Language production was also assessed by the action generation test (AGT), designed to evaluate lexical access by semantic associations. The AGT consisted of 20 auditory nouns divided into two categories: ten nouns without a phonologically derived action (AGTnf) (e.g. pencil-to write) and ten nouns with a phonologically derived action (AGTf) (e.g. conversation-to converse). Participants were instructed to generate a semantically associated action for each stimulus, considering that phonologically derived actions were not allowed. Thus, the AGTf entails cognitive inhibitory processes and was considered more difficult compared to the AGTnf.
Sentence comprehension was examined by the anaphora test (APHT) and the center-embedded subordinate clauses test (CESCT), both instruments designed by the research group. The APHT assesses the ability to make the necessary inferences to comprehend sentences involving pronominal anaphora. The test consisted of twenty sentences, ten of which were nonambiguous (APHTna), in which the anaphora is resolved by the gender key (e.g. Marta gave a painkiller to Enrique as he had a headache) and the other ten were ambiguous (APHTa), where gender does not solve the ambiguity, requiring a semantic interpretation of the sentence to solve it (e.g. Elena laughed at Teresa's jokes, because she was very funny). Participants were instructed to listen to the sentences and look at the computer screen where two words would appear during each sentence auditory presentation. These words correspond to the characters in the opening sentence, that is, the subject (Marta) and the object (Enrique) of the sentence. After each sentence presentation, participants were asked to answer a question regarding either the subject (Who gave a painkiller?) or the object (Who had a headache?) of the sentence. The CESCT design consists of twenty sentences with two levels of syntactic complexity. Ten sentences were simple declarative in form, without a subordinate clause (CESCTsimple) (e.g. The bellboy greeted the slim receptionist). The other ten sentences were made more complex syntactically by the addition of a center-embedded relative clause (CESCTcomplex), and in which the subject of the main clause is in turn the subject of the relative clause (e.g. The girl who pinched her cousin was naughty). All sentences used the active voice and were considered nonconstrained since the nouns could exchange places without violating the semantic coherence of the sentence (e.g. the girl and the cousin are equally capable of pinching each other). As in the APHT, participants were instructed to listen to the sentences and look at the computer screen where two words would appear during each sentence auditory presentation. These words correspond to the subject (bellboy) and the object (receptionist) of the sentence and participants were asked to answer a question regarding either the subject (Who greeted?) or the object (Who was greeted?) of the sentence.
Statistical analysis
Nonparametric statistics were used to evaluate differences between groups because the Shapiro-Wilk W test showed that the data deviated from the normal distribution. The Mann-Whitney and Kruskal-Wallis tests were used to compare pairs of groups and multiple groups, respectively. Bonferroni correction for multiple comparisons was applied, and effect size measures were calculated. Chi-squared tests were used for categorical data. Correlational analyses were performed using Spearman rank correlations to examine the association between language performance and other cognitive functions (p < .01). Logistic regression analyses were conducted to examine the performance of linguistic functions in PD patient subgroups and to examine the pattern of linguistic dysfunctions as predictors of PDD. The independent predictive values of the variables were expressed as odds ratios (OR) with 95% confidence intervals (CI). p < .05 was set as the level of statistical significance. All analyses were performed with SPSS-PC software version 24.0 for Windows.
[Table notes] b Comparison between healthy controls and the PD group was significant. c Comparison between PD-nSCD and PD-SCD was significant. d Comparison between HC and PD-MCI was significant. e Comparison between PD-nSCD and PD-MCI was significant. f Comparison between PD-SCD and PD-MCI was significant.
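For illustration, the group-comparison pipeline described above (Kruskal-Wallis omnibus test, pairwise Mann-Whitney tests, Bonferroni correction) can be sketched with scipy on hypothetical scores; the data and the number of comparisons below are invented for this example.

```python
from scipy import stats

# Hypothetical test scores per group, for illustration only.
hc    = [18, 19, 17, 20, 18, 19]
pdscd = [17, 18, 16, 17, 19, 16]
pdmci = [14, 13, 15, 12, 14, 13]

# Omnibus test across the groups, then a pairwise test with Bonferroni correction.
h_stat, p_kw = stats.kruskal(hc, pdscd, pdmci)
u_stat, p_mw = stats.mannwhitneyu(hc, pdmci, alternative="two-sided")
n_comparisons = 3
p_corrected = min(p_mw * n_comparisons, 1.0)   # Bonferroni adjustment
print(f"Kruskal-Wallis p = {p_kw:.4f}; HC vs PD-MCI Bonferroni-corrected p = {p_corrected:.4f}")
```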
Results
Twenty-two PD patients (47.8%) met the criteria for PD-MCI, fourteen patients (30.5%) were classified with a diagnosis of PD-SCD, and the remaining ten patients (21.7%) were classified as PD-nSCD. The neuropsychological performance for HC and PD patients (PD-nSCD, PD-SCD, PD-MCI) is available as supplementary material. Briefly, the PD-MCI group showed a poor performance, compared to HC and PD-nSCD, in the four evaluated domains (attention, executive, memory and visuospatial). Moreover, the PD-MCI group also performed poorly, compared to PD-SCD, in the executive domain and visuospatial domain. No significant differences were found between PD-SCD and HC in any of the neuropsychological tests.
Linguistic function analyses
Four PD patients did not complete the AGT and APHT. The linguistic functions assessment showed that the PD-MCI group performed poorly, compared to HC, in the naming tests (NNT p < .001, r = .66; VNT p = .004, r = .53) and comprehension tests (APHTna p = .048, r = .41; CESCTsimple p = .002, r = .56; CESCTcomplex p = .002, r = .56). Similar patterns were found between the PD-MCI and PD-nSCD groups. Moreover, significant differences were also found between the PD-MCI and PD-nSCD groups in the action generation test (AGTnf p = .003, r = .61; AGTf p = .031, r = .50). The PD-SCD group performed poorly only in the AGT, compared to the HC and PD-nSCD groups (Table 2). PD patients were classified as "altered" or "nonaltered" to explore the percentage of patients who presented a clinically deficient performance in the linguistic tests. Linguistic impairment was defined as performance one standard deviation or more below the mean of the control group (Table 3).
Pairwise comparisons between groups showed a significantly greater percentage of PD-MCI patients presenting a clinically deficient performance, compared to PD-SCD and/or PD-nSCD subjects, in the production tests (NNT, VNT, AGTnf) and comprehension tests (APHTna and CESCTcomplex). No significant differences were found in the percentage of patients with a clinically deficient performance in the CESCTsimple, which was high in all three groups. In addition, a significantly greater percentage of PD-SCD patients (similar to the PD-MCI group) presented a clinically deficient performance in the VNT and AGTnf, compared to PD-nSCD subjects, who did not perform in a clinically altered manner.
Linguistic functions as a predictor of PD dementia
Conversion to dementia during the follow-up study was more frequent in patients with PD-MCI (50%) compared to patients with PD-SCD (33.3%), and more frequent in the PD-SCD group compared to patients with PD-nSCD (14.3%). The percentage of patients who converted to dementia and those who did not, together with their baseline clinical characteristics, are available as supplementary material. Seven PD patients did not participate in the follow-up study (two PD-MCI, two PD-SCD and three PD-nSCD).
Logistic regressions were used to explore the association between linguistic performance and dementia development. According to the results shown in Table 4, an altered VNT (OR = 12.00) and AGTnf (OR = 5.71) were significant predictors of dementia. Regarding comprehension tasks, an altered CESCTsimple was the test most associated with risk of dementia development (OR = 3.25), although it did not reach statistical significance. The remaining comprehension tasks were not statistically significant either. Considering the results shown in Table 4, logistic regression was conducted to explore whether a specific pattern of linguistic alterations added an increased risk of developing dementia. The results show that the combination of altered VNT-AGTnf-CESCTsimple (Wald = 8.54; p = .003; OR = 29.33; 95% CI 3.041, 282.904) was associated with an increased risk of dementia development, compared to performance in isolated tasks. PD patients with and without altered VNT-AGTnf-CESCTsimple were compared on digit span backward and the Wisconsin test (categories) to explore the association of linguistic performance with working memory and executive functions. PD patients with an altered performance in the linguistic tasks showed a poorer performance in the Wisconsin test (p = .022), but not in digit span (p = .590). Logistic regression analysis showed that Wisconsin categories were a significant predictor of altered VNT-AGTnf-CESCTsimple (Wald = 4.44; p = .035; OR = 2.04; 95% CI 1.051, 3.964), whereas digit span (backward) did not reach statistical significance and was not included in the model (p = .838). Correlation analyses of the linguistic tests with the Wisconsin test and digit span are included as supplementary material.
In addition, logistic regression was conducted to explore whether the pattern of linguistic impairment, in combination with executive resources and other cognitive variables, added to the risk of developing dementia. The altered VNT-AGTnf-CESCTsimple, digit span (backward), Wisconsin categories, phonemic and semantic fluency, Stroop test (interference index) and MMSE pentagon copying were included in the regression analysis as independent variables. The forward stepwise method was used to exclude nonsignificant variables. The results revealed that the altered VNT-AGTnf-CESCTsimple was a significant independent predictor of dementia (Wald = 8.54; p = .003; OR = 29.33; 95% CI 3.041, 282.904). Digit span (p = .655), Wisconsin categories (p = .286), phonemic (p = .732) and semantic fluency (p = .301), Stroop interference (p = .539) and pentagon copying (p = .353) did not reach statistical significance and were not included in the model, leaving the significance of the VNT-AGTnf-CESCTsimple unaffected.
A new logistic regression was conducted to study whether linguistic dysfunction (altered VNT-AGTnf-CESCTsimple), in combination with demographic and clinical factors (age ≥ 65, years of education, Information subtest, PD duration, age at onset of the disease and UPDRS motor score), added to the risk of developing dementia. The forward stepwise method was used to exclude nonsignificant variables. The results revealed that the altered VNT-AGTnf-CESCTsimple was significant in step one as an independent predictor of dementia (Wald = 8.29; p = .004; OR = 28.00; 95% CI 2.898, 270.541). The altered VNT-AGTnf-CESCTsimple (Wald = 7.84; p = .005; OR = 33.60; 95% CI 2.871, 393.221) and age ≥ 65 (Wald = 3.65; p = .056; OR = 6.10; 95% CI .954, 38.993) were included in step two. Years of education (p = .915), the Information subtest (p = .877), PD duration (p = .844), age at onset of the disease (p = .283) and UPDRS motor score (p = .263) did not reach statistical significance and were not included in the model, leaving the significance of the VNT-AGTnf-CESCTsimple unaffected.
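For readers unfamiliar with how the reported odds ratios and confidence intervals arise, the sketch below shows the standard computation (OR = exp(beta); the 95% CI is the exponentiated coefficient CI) on simulated, hypothetical data. It is not the study's stepwise SPSS model, and the variable names and effect sizes are illustrative only.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 200
    altered_pattern = rng.integers(0, 2, n)   # altered VNT-AGTnf-CESCTsimple (0/1)
    age65 = rng.integers(0, 2, n)             # age >= 65 (0/1)
    logit = -2.0 + 3.0 * altered_pattern + 1.0 * age65
    dementia = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

    X = sm.add_constant(np.column_stack([altered_pattern, age65]))
    fit = sm.Logit(dementia, X).fit(disp=0)

    odds_ratios = np.exp(fit.params)   # OR = exp(beta)
    ci_or = np.exp(fit.conf_int())     # 95% CI, exponentiated to the OR scale
    print(odds_ratios, ci_or, fit.pvalues, sep="\n")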
Discussion
The aim of the study was to investigate language performance in patients with PD-SCD and PD-MCI and to explore the clinical value of linguistic impairment as a predictor of PDD. PD-MCI patients showed an altered execution in language production, characterized by difficulties in noun and verb naming, as well as an impairment in the generation of actions starting from a noun. As a complementary approach to the results of the group comparisons, the study of the percentage of patients who presented a clinically deficient performance showed a high percentage of PD-MCI patients with a clinically deficient execution in these processes (naming nouns 82%, naming actions 64%, generating actions 57%). In addition, the PD-MCI group showed an altered performance in language comprehension, observed in the altered execution of anaphora resolution and of the comprehension of sentences with different levels of complexity. Interestingly, comprehension deficits did not occur only in highly complex sentences that included center-embedded subordinate clauses (77% clinically deficient), but were also observed in declarative sentences without relative clauses, with 68% of PD-MCI patients presenting a clinically deficient execution. With respect to PD-SCD patients, the group comparisons showed a deficient performance in language production, which was manifested by an altered execution in generating verbs associated with nouns. Moreover, a high percentage of PD-SCD patients, similar to the PD-MCI group, presented a clinically altered execution in naming actions and generating actions. It is worth mentioning that none of the PD patients without SCD showed an altered execution in these tasks.
The above results are of interest because data on language performance are extremely limited in PD-MCI. Moreover, no previous studies have focused on language execution in PD-SCD patients. The production difficulties observed in PD-MCI patients in word generation and naming are consistent with previous investigations focused on PD patients with MCI (Bocanegra et al., 2015) and also with studies prior to the MDS criteria for PD-MCI (Bertella et al., 2002; Cotelli et al., 2007; Rodríguez-Ferreiro et al., 2009). Moreover, the specific difficulties in generating verbs associated with nouns observed in PD-SCD are consistent with several previous studies that found a disadvantage for verb production compared to nouns (Crescentini et al., 2008; Péran et al., 2003). On the other hand, the pattern of comprehension impairment, not limited to sentences with high levels of complexity, is consistent with previous studies that included PD patients without dementia (Johari et al., 2019), PD-MCI patients (Bocanegra et al., 2015) and different investigations prior to the current PD-MCI criteria (Grossman et al., 1991; Skeel et al., 2001). However, other investigations, prior to the PD-MCI criteria, reported that PD patients showed an altered comprehension of sentences with high complexity, especially with center-embedded subordinate clauses, but not of sentences without subordinate clauses (Grossman, 1999; Grossman et al., 1992; Hochstadt, 2009; Hochstadt et al., 2006). This discrepancy can be explained by different factors. Firstly, it is likely that a significant number of studies previous to the current PD-MCI criteria were conducted with heterogeneous samples of PD patients that included subjects with and without MCI. Secondly, numerous investigations have explored comprehension with a wide diversity of experimental tasks in which different sentence parameters have been manipulated, including syntactic complexity, semantic content, reversibility or animacy, among others. In the present study, only syntactic complexity was manipulated, by the inclusion or not of a center-embedded subordinate clause. Semantic content was the same for both sentence types of the CESCT. The same applied to reversibility: in all sentences the action was equally likely to be performed by either of the characters involved. Regarding animacy, in the simple and complex sentences of the CESCT both characters are animate entities (e.g. humans, animals), and as such are more likely to perform actions compared to inanimate entities (e.g. objects), which are more likely to be the object of actions.
Thus, the simple sentences of the CESCT can be considered as "more complex" than the simple sentences in some previous studies because these sentence parameters did not facilitate the comprehension.
The results of the group comparisons, combined with the clinically deficient execution data, are of much relevance to clarify the timing and order of appearance of language impairments. Taken together, these results suggest that PD-SCD language performance is characterized by a specific deficit for action words, accompanied by possible difficulties in sentence comprehension (around 40% of PD-nSCD and PD-SCD showed an altered execution in the CESCT). The progression of cognitive impairment, characterized by the involvement of different cognitive domains and a PD-MCI diagnosis, is associated with a greater impairment of the language domain, significantly affecting production (nouns and verbs) and comprehension of sentences with different levels of syntactic complexity. These results are consistent with recent investigations which reported that PD patients without MCI showed a selective difficulty for action verbs compared to nouns, accompanied by difficulties in sentence comprehension (Bocanegra et al., 2015). A PD-MCI diagnosis was associated with a more generalized language impairment (Bocanegra et al., 2015). Another recent study focused on asymptomatic PD mutation carriers, that is, individuals unaffected by PD but with mutations in PARK2 or LRRK2. The preclinical PD sample showed deficits in sentence comprehension in the absence of other linguistic or executive difficulties. This result reinforces the assumption that deficits in language comprehension can be present even in the early stages of the disease. A second objective of the present investigation was to study the clinical value of linguistic impairment as a predictor of PDD development. The data reported in the present investigation show that impairment in action naming (OR = 12.00) and action generation (OR = 5.71) was related to a greater risk of PDD development. Alteration in the comprehension of simple declarative sentences was also associated with an increased risk of dementia (OR = 3.25), although this did not reach statistical significance. Interestingly, PD patients who were deficient in action words (action naming and action generation) and sentence comprehension exhibited a high risk of PDD development (OR = 29.33), which was greater than the risk associated with the presence of action naming difficulties alone. Different cognitive functions have been associated with an increased risk of dementia. Demographic (older age, education) and clinical factors (age at onset, years since diagnosis, motor symptoms) have also been recognized as variables associated with the evolution of cognitive impairment (Marinus et al., 2018). Moreover, different investigations have associated language difficulties in PD with executive deficits. Thus, an important question is whether language impairment can be considered a more useful predictor of dementia, compared to the above-mentioned demographic and clinical factors, as well as other cognitive measures. The result of the regression model showed that the combination of deficits for action words (action naming and action generation) and sentence comprehension was a significant predictor of dementia, whereas the remaining cognitive tests did not reach statistical significance. Thus, the present results, although preliminary because of the sample size, suggest that this pattern of linguistic dysfunction can be considered a useful predictor of dementia. As expected, age ≥65 also contributed to the regression model (Marinus et al., 2018). The important role of the executive functions in other cognitive processes is well known. However, in the authors' opinion, the specific implication of executive functions in the interpretation of an evolving pattern of language production/comprehension difficulties in PD is still unclear. The results of the present investigation are consistent with previous studies (Skeel et al., 2001), and are reinforced by recent investigations with PD-MCI patients (Bocanegra et al., 2015) and a preclinical PD sample that was deficient in linguistic functions in the absence of executive difficulties. The language domain includes a set of complex behaviors that involve several processes related to peri-Sylvian and extra-Sylvian cerebral areas. Current knowledge regarding brain functioning suggests that different categories of content are represented in different regions of the brain depending on the sensory and motor processes involved in the acquisition of these contents (Goldberg et al., 2006). Semantic representations of action words would be supported by regions that are directly involved in motor planning and execution (i.e. primary motor cortex, premotor cortex), whereas nouns would be represented in posterior cortical areas (i.e. perceptual/sensory regions) (Auclair-Ouellet et al., 2017). PD is characterized by the loss of dopaminergic cells in the substantia nigra, the interruption of the frontal-striatal-thalamic anatomic loop and the consequent deterioration of motor control. This is a possible explanation of the early difficulties with action words reported in previous studies (Bocanegra et al., 2015) and observed here in the PD-SCD subsample.
However, it is now widely recognized that PD evolves into a multi-system disorder that extends beyond the substantia nigra pars compacta, affecting frontal and temporo-parietal cortical areas, as well as subcortical regions (Foffani & Obeso, 2018). The dual syndrome hypothesis differentiates between the following two cognitive syndromes in PD patients: (1) the fronto-striatal syndrome, which is associated with an executive dysfunction profile and dopamine depletion; and (2) the posterior cortically based cognitive profile, characterized by dysfunction in language and visuospatial functions, which is linked to nondopaminergic neurotransmitters and associated with an increased risk of dementia (Williams-Gray et al., 2009). In line with these results, recent investigations showed that the posterior cortical PD-MCI subtype, characterized by visuospatial, language (assessed by the Boston naming test) or memory deficits, was associated with more extensive structural alterations (i.e. caudate nuclei, thalamus, hippocampus and several white matter tracts) and increased basal ganglia intra-network functional connectivity, which could be interpreted as a compensatory mechanism for neurodegeneration. The results here are consistent with the dual syndrome hypothesis by showing that PD patients with a pattern of linguistic impairment that includes a deficit in sentence comprehension have a high risk of developing dementia.
Certain limitations of the present study need to be acknowledged. The sample size is relatively small, especially in the PD-nSCD group. The number of participants limited the methodological approach, especially regarding studying the relationships between different cognitive domains in greater detail. Moreover, although the design of the language instruments was based on evidence in the scientific literature, a prior validation study is not available. Future longitudinal investigations with larger samples and the inclusion of biomarkers (e.g. neuroimaging) could confirm these findings, with special attention paid to comparing the predictive value of linguistic dysfunctions with that of other cognitive domains.
In summary, the present investigation is the first to conduct a comprehensive assessment of linguistic functions in a sample of PD patients with SCD and MCI, and is also the first to study the clinical value of linguistic impairment as a risk factor for PDD development in a follow-up study. PD-SCD subjects showed a difficulty with action words, which was not observed in PD patients without SCD. A PD-MCI diagnosis was associated with a greater impairment of the language domain, significantly affecting the production of nouns and verbs, as well as the comprehension of sentences with different levels of syntactic complexity. Finally, the coexistence of deficits for action words (action naming and action generation) and sentence comprehension in PD patients can be considered a useful predictor of PDD development. In the authors' opinion, the results of the present investigation are of much value for researchers and also for clinicians. Approximately eight out of ten PD patients will develop dementia after 20 years (Hely et al., 2008), which has a marked effect on the quality of life of patients and caregivers, with a great societal and financial impact (Leroi et al., 2012). The results, although exploratory, suggest that specific patterns of linguistic dysfunction, which can be present even in the early stages of the disease, can predict future dementia, reinforcing the importance of advancing knowledge of linguistic dysfunctions in the predementia stages of PD. These results are highly applicable considering that it would not be difficult to incorporate these types of instruments, which are generally brief and easy to apply and interpret, into daily clinical practice. | 2022-10-14T06:17:15.872Z | 2022-10-13T00:00:00.000 | {
"year": 2022,
"sha1": "a2e99553ab3820b84d49523677355d3f89680c8b",
"oa_license": "CCBY",
"oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/0E926A9B4B998726537FAFEF86D10117/S1355617722000571a.pdf/div-class-title-specific-pattern-of-linguistic-impairment-in-parkinson-s-disease-patients-with-subjective-cognitive-decline-and-mild-cognitive-impairment-predicts-dementia-div.pdf",
"oa_status": "HYBRID",
"pdf_src": "Cambridge",
"pdf_hash": "ddc7869423fd658992dc5b773c243e4457c03fde",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
246702481 | pes2o/s2orc | v3-fos-license | Integrated testing for TB and COVID-19
Integrated testing for TB and COVID-19 may help find those TB patients who are not accessing care in the context of the COVID-19 pandemic. Some molecular platforms with assays for both diseases are already commercially available; however, integrated testing approaches need to be systematically evaluated to ensure their appropriate implementation.
The COVID-19 pandemic is wreaking havoc on all realms of global health, and tuberculosis (TB) care and services are no exception. 1,2 After one year of the pandemic, high TB burden countries were reporting drops in TB case notifications ranging from 16% to 41% (mean 23%), levels not seen since 2008 (https://www.stoptb.org/news/12-months-of-covid-19-eliminated-12-years-of-progress-global-fight-against-tuberculosis). Of the approximately 10 million people who developed TB in 2020, only 5.8 million were diagnosed and reported to national TB programs, an 18% decrease compared to 2019. 1 Reduced access to medical care and TB services, as well as the reallocation of existing public health tools, personnel, and infrastructure to COVID-19 efforts, could explain this reduction. 3 Predictably, the decline in people seeking TB care has led to an increase in TB-related deaths: in 2020, at least 1.5 million people died of TB, a figure not seen since 2017. Evidently, we are off track to meet many of the End TB goals, and without redoubled efforts and innovative strategies to reach those who need testing and treatment, we risk missing these milestones by even larger margins. 1 Additionally, emerging data suggest that people with TB who develop COVID-19 are at a higher risk of severe disease and mortality compared to those without COVID-19. As well, a recent systematic review reported that COVID-19 patients with TB were at almost twice the risk of mortality compared to individuals with COVID-19 alone. 4 Therefore, reaching people with TB and ensuring they receive appropriate care is urgent. Increasing TB testing and case notifications is a critical and pressing priority.
Proposed action: "Integrated testing"
More people with presumptive TB need to reach the second step of the TB care cascade, namely, moving from (1) developing incident TB to (2) accessing testing. 5 One proposed catch-up intervention is integrating testing for TB and COVID-19. In the absence of World Health Organization policy, other global stakeholders have proposed frameworks and guidelines for integrated testing, although the evidence to support these recommendations is limited.
In early 2021, the US Agency for International Development (USAID) and the Stop TB Partnership issued a brief document recommending that for people presenting to healthcare facilities with respiratory symptoms, a "simultaneous, integrated approach to testing for TB and COVID-19 should be implemented in countries with a high burden of TB," namely, "diagnostic tests for both COVID-19 and TB should be done at the same time (simultaneous testing) on a multiplex testing platform (integrated testing)." 6 These recommendations are broad and apply to anyone with presumptive TB or COVID-19.
In late 2021, the Global Fund released a briefing note providing guidance regarding testing for TB and COVID-19. In particular, they recommend that in communities with prevalent TB and COVID-19, people whose clinical signs and symptoms meet case definitions for both TB and COVID-19 should undergo "systematic testing for both pathogens." They also recommend molecular testing for those who have had symptoms for longer than 7 days, and antigen rapid testing for those with symptoms lasting 5-7 days (Figure 1). 7 The Global Fund brief also suggests action in case people with TB are suspected to have developed COVID-19 symptoms. Should this situation arise, TB must continue to be properly managed, and testing for SARS-CoV-2 infection should then be performed if individuals "meet the COVID-19 case definition or when there is persistence or worsening of their condition despite appropriate treatment for the specific form of TB." 7 Certain countries have launched their own integrated testing efforts. In India, for example, this approach is termed bi-directional screening for TB and COVID-19. The Indian Ministry of Health and Family Welfare recommends that "COVID screening for all diagnosed TB patients and TB screening for all COVID positive patients should be conducted," that TB screening should be undertaken in all individuals with influenza-like illness, and that all individuals with severe acute respiratory illness should be screened for TB. 8 Recently, the ministry also issued guidance that all individuals undergoing COVID-19 treatment with a cough lasting longer than 2 weeks should be tested for TB. 9 The differences in these approaches demonstrate that there is not yet a clear consensus on who exactly should be prioritized: the target population for integrated testing needs to be better defined. The Global Fund's testing algorithm (Figure 1) is narrower than the catch-all approach proposed by the Stop TB Partnership and USAID. However, the two documents ultimately may not lead to much difference in practice, as discerning between the two diseases based on the clinical picture alone can be difficult. Also, because COVID-19 has been impacting TB-endemic countries in waves, the need for integrated testing changes over time. When COVID-19 incidence rates are low, it is not completely obvious why TB programs would also need testing for COVID-19. And while this approach may assist in finding cases, the yield of integrated testing and its cost-effectiveness is unclear.
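To make the kind of decision logic in these frameworks concrete, below is a minimal sketch of our reading of the Global Fund symptom-duration rule described above (molecular testing beyond 7 days of symptoms, antigen rapid testing at 5-7 days). The function names are hypothetical, and the handling of symptoms shorter than 5 days, which the brief does not specify, is our assumption.

    def covid_test_choice(symptom_days):
        """Pick a COVID-19 test by symptom duration, per the briefing note."""
        if symptom_days > 7:
            return "molecular test"
        if 5 <= symptom_days <= 7:
            return "antigen rapid test"
        return "clinical judgement"  # <5 days: not specified in the brief

    def integrated_workup(meets_tb_case_def, meets_covid_case_def, symptom_days):
        """Systematic testing for both pathogens when both case definitions are met."""
        tests = []
        if meets_tb_case_def and meets_covid_case_def:
            tests.append("molecular TB test")  # e.g. on a sputum sample
            tests.append(covid_test_choice(symptom_days))
        return tests

    print(integrated_workup(True, True, 9))  # ['molecular TB test', 'molecular test']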
Possible testing platforms
Despite this ambiguity, there are some products already available that may be utilized for integrated testing. As countries have greatly scaled up their testing capabilities to deal with the large-scale demand for COVID-19 testing, this infrastructure could be expanded to include more testing for TB. In some cases, molecular testing for COVID-19 was originally possible due to existing laboratory infrastructure and diagnostics networks used by national TB and HIV programs. 10 Single cartridges that can detect multiple pathogens as well as non-PCR-based platforms for multidisease testing are becoming commercially available. The work to create "fast follower" assays has begun, with some TB tests that utilize the same platform or technical basis as novel COVID-19 tests under development. Some products that may be considered for integrated TB and COVID-19 testing are shown in Table 1, including GeneXpert, a technology that has been scaled up in many high TB burden countries (Figure 1).
While all the molecular platforms in Table 1 require sputum samples for TB testing, the conventional sample for COVID-19 testing is a nasopharyngeal (NP) swab. This poses challenges for performing integrated testing with a single clinical sample, and data are needed to show whether sputum could be a viable and useful sample for COVID-19 testing. In high TB burden countries, healthcare workers have experience in coaching patients to produce sputum samples and within the context of pandemic-related medical equipment stock-outs, specimen types that do not require specialized equipment are appealing.
The massive investment into research and development of COVID-19 diagnostics will hopefully be leveraged to produce a variety of new diagnostic tests for TB, a chronically underfunded area. Consider the advances in molecular tests that have made some COVID-19 tests available at lower levels of healthcare systems, including at the point of care and even for self-testing at home, or the development of tests that can run on a broad variety of samples. Tests incorporating either of these user-friendly advances have the potential to reach people in remote areas and may make obtaining good quality samples easier from people with all forms of presumptive TB. 10

Supporting evidence for integrated testing?
Evidence describing the disruptions to TB programs and services since the beginning of the COVID-19 pandemic is accumulating, 3,4 and although it is well recognized that there has been a global decline in TB testing, 3 reports of interventions to recover these losses are very limited. This is also true for integrated TB and COVID-19 testing.
One study in Madagascar reported that the Ministry of Public Health had decided to use existing GeneXpert platforms, which were in place for TB testing, for COVID-19 testing. The authors noted that automated platforms like GeneXpert require less trained staff than traditional PCR due to the decreased number of hands-on steps, and because the network already existed, the country could quickly take advantage of its services. 11 In high TB burden settings like South Africa, it has been suggested that community-based screening networks deployed for COVID-19 could also support TB testing and lead to improved linkage to care. For individuals who test positive for TB, there is an opportunity to test for COVID-19 at the time of TB contact tracing. 12 A publication of public health efforts in Kerala, India, has shown that systematic integration of COVID-19 and TB testing is possible. 13 When health authorities noted that their capacity was entirely being directed to COVID-19 control efforts, they adopted strategies to incorporate those efforts with other disease programs' work. Anyone who was eligible for COVID-19 testing was screened for TB if they met any of four conditions: (1) presence of influenza-like illness in an individual with risk factors for developing TB (e.g., close contact, elderly, living with diabetes), (2) testing negative for COVID-19 but with symptoms lasting >14 days, (3) hospitalization due to COVID-19, or (4) testing positive for COVID-19 and screening positive on a four-symptom (cough >2 weeks, fever >2 weeks, weight loss, night sweats) TB screen. Those who screened positive were offered molecular TB testing using new and existing Truenat and GeneXpert platforms, with tests for TB and COVID-19 run on both systems. The authors reported that their integrated testing efforts comprised 8% of total TB diagnoses made state-wide in a 1-month period. 13 Currently, we are recruiting adults with presumptive TB or COVID-19 in Lima, Peru, to investigate integrated TB and COVID-19 testing using the GeneXpert platform. 14 Each participant is providing us with a sputum sample and an NP swab, which are then tested for both TB and COVID-19. So far, we have observed that with a single sputum sample, we can identify 98% of culture-confirmed TB cases and 84% of RT-PCR-confirmed COVID-19 cases. Our in-study prevalence of concurrent TB and COVID-19 is around 2%, which does raise some questions about the efficiency and cost-effectiveness of this approach on a large scale. 14

Further answers are needed
Integrated testing for TB and COVID-19 has the potential to help recover a proportion of the missing people who developed TB in the pandemic era, but many important unknowns remain regarding its implementation. Implementing integrated testing within existing laboratory and specimen collection workflows is likely feasible, but patient acceptability of introducing testing for COVID-19 and TB, a disease that is typically accompanied by substantial social stigma, is not known.
Evidence is urgently needed to understand who exactly should be tested for both TB and COVID-19. Considering that COVID-19 is typified by acute but short-term symptoms, while TB is characterized by longer-term clinical symptoms, who is the ideal target population for integrated testing? As resources are not unlimited, an efficient strategy that can identify cases at a relatively high rate would be desirable. Certain subpopulations, such as immunocompromised people, will need particular attention to ensure integrated testing policies reach them. Evaluations of integrated testing in large populations with diverse epidemiologic history, clinical symptoms, and varied symptom duration distributions will help draw conclusions regarding who should undergo integrated testing.
Examining integrated testing in a variety of settings is needed to understand where this intervention should immediately be rolled out. Example settings include urban locales with very high TB prevalence where there are many shared risk factors for COVID-19; rural settings in high TB burden countries where integrated disease testing could greatly improve quality of care; or countries with a higher proportion of people who are immunocompromised. It is also important to account for the fact that COVID-19 has been impacting countries in waves: in countries with high TB prevalence, integrated testing may therefore be irrelevant when COVID-19 incidence is low, but may again become highly relevant during COVID-19 surges. Good surveillance for both infections is necessary to make adjustments in testing policies.
Against the background of global supply chain issues, further investigation into alternative but acceptable sampling strategies is warranted. Although NP swabs are considered the gold standard for COVID-19 molecular testing, other samples may yield a sufficiently high proportion of cases. Other sample types such as tongue swabs, nasal swabs, throat swabs, aerosol collection in face masks, or conventional sputum could be candidates for testing of both conditions, and strategies like sample batching and at-home sample collection should be considered. More work is needed to understand these approaches in the context of integrated testing.
Modeling studies to understand the cost and cost-effectiveness of integrated TB and COVID-19 testing will help aid decision-making and priority setting. Simulations should be run for different iterations of testing strategies, target populations, incidence triggers, and settings. Despite the calls for integrated disease testing, it is not yet known whether the yield will be worth the additional costs. Molecular testing for COVID-19 is expensive for many low- and middle-income countries. Adding a second molecular test for TB for all people with respiratory symptoms might be prohibitively expensive in many settings unless the yield justifies the effort and costs. Understanding this piece will help countries plan and allocate resources, as well as inform future policies.
Concluding remarks
Ostensibly, integrated testing for TB and COVID-19 seems to be a beneficial intervention, but it has yet to be systematically evaluated and data are lacking. Available multiplex platforms can facilitate this testing, with more products emerging. As new SARS-CoV-2 variants of concern arise, integrated testing policies will need to be re-examined and updated as needed. Careful design of studies and outcomes will need to be considered to ensure the real impact of integrated testing can be identified and measured.

M.R. is an employee of FIND. FIND is a not-for-profit foundation whose mission is to find diagnostic solutions to overcome diseases of poverty in low- and middle-income countries (LMICs). It works closely with the private and public sectors and receives funding from some of its industry partners. It has organizational firewalls to protect it against any undue influences in its work or the publication of its findings. All industry partnerships are subject to review by an independent scientific advisory committee or another independent review body, based on due diligence, TPPs and public sector requirements. FIND catalyzes product development, leads evaluations, takes positions, and accelerates access to tools identified as serving its mission. It provides indirect support to industry (e.g., access to open specimen banks, a clinical trial platform, technical support, expertise, laboratory capacity strengthening in LMICs) to facilitate the development and use of products in these areas. FIND also supports the evaluation of publicly prioritized TB assays and the implementation of WHO-approved (guidance and performance qualification [PQ]) assays using donor grants. In order to carry out test evaluations, FIND has product evaluation agreements with several private sector companies for TB and other diseases, which strictly define its independence and neutrality vis-à-vis the companies whose products get evaluated and describe roles and responsibilities. | 2022-02-11T14:12:36.366Z | 2022-02-01T00:00:00.000 | {
"year": 2022,
"sha1": "16da1d148669739f41472473608c91ec62c07106",
"oa_license": null,
"oa_url": "http://www.cell.com/article/S266663402200085X/pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "37cc9b3200f9329223e8b33636a44e3ac100f75d",
"s2fieldsofstudy": [
"Medicine",
"Business"
],
"extfieldsofstudy": [
"Medicine"
]
} |
253197817 | pes2o/s2orc | v3-fos-license | Prevalence of the relative age effect among high-performance, university student-athletes, versus an age-matched student cohort
Background Relative age effect (RAE) refers to the over-representation of athletes born earlier in the selection year of a specific sport. The RAE is especially prevalent in youth sports but often persists into senior competitive levels. Objectives To determine the prevalence and magnitude of the RAE among student-athletes in a high performance (HP) programme at a South African university, according to year, sports code and sex, compared to the general student cohort. Methods Cross-sectional descriptive analysis of HP-student-athletes and an age-matched student cohort from 2016 to 2021. Birthdate data were extracted for the HP student-athletes (N = 950: men = 644, women = 306) and student comparison group (N = 47 068; men = 20 464; women = 26 591; not disclosed = 13). Differences were determined using Chi-squared and Fisher's exact tests. Residuals examined relative age quartile differences. These steps were applied across academic years, sport code and sex. Results The RAE was more pronounced among the student-athletes compared to the age-matched student cohort. The RAE was occasionally observed among the HP-student-athletes; however, the prevalence was inconsistent across the respective years under investigation and only noted in certain sport codes (i.e. swimming, rugby union and cricket). There were no sex differences among the HP student-athletes. Conclusion Where the RAE was noted, the selection bias favoured the relatively older student-athletes. The mechanisms for RAE are multifactorial and complex. A combination of factors, such as competition depth, the popularity and physicality of a sport and socialisation may be involved.
on the timeline and impact the RAE makes when prevalent in youth sports, and may provide insight into the selection and participation patterns of university student-athletes. The researchers hypothesised that the RAE would be prevalent and that there would be a bias towards relatively older student-athletes being selected for high-performance opportunities.
Methods
Ethical approval was received from the SU Research Ethics Committee for Social, Behavioural and Educational Research (REC: SBE project number: 21919), and institutional permission was granted by the Division for Information Governance (IG-2166). Since the data did not contain identifiable information, informed consent was not required. The study was conducted according to the Declaration of Helsinki.
We included date of birth data of South African SU students aged 18 to 25 years from 2016 to 2021. The study was delimited to the last six years for which complete data sets exist for the student-athletes, which is when Maties Sport started the induction, monitoring, and tracking of their HP-student-athletes. Since the HP programmes focus on Varsity Sport/Varsity Cup sporting codes, 25 years was set as the maximum age for the student-athletes, coinciding with the competition age limit. Non-South-Africans were excluded to ensure that all participants were subject to the same cutoff date (1 January) used for age-group categorisation.
All 128 230 data records were analysed in RStudio. To ensure that the participants in each academic year were unique and included once only, the analysis was restricted to the new-intake students for each year (N = 48 018). The data were divided into two groups: (1) the general student cohort (N = 47 068; men = 20 464; women = 26 591; not disclosed = 13), and (2) the HP-student-athletes (N = 950; men = 644; women = 306). The student-athletes came from 11 HP sport codes (Athletics = 90; Basketball = 67; Cricket = 77; Cycling = 33; Field hockey = 133; Netball = 67; Rugby union = 260; Soccer = 95; Swimming = 72; Tennis = 40; Water polo = 16). It was essential to include a comparison cluster of age-matched general students to assess whether the RAE was prevalent in the general student population or whether the phenomenon is sport-specific.
Starting with January, all participants were grouped into quartiles (Q1: January to March, Q2: April to June, Q3: July to September, Q4: October to December). The Chi-Square goodness of fit test was used to test differences in birth quartile frequencies of the full student population against a theoretical expected distribution, a day-corrected quartile distribution (Q1 = 24.7%, Q2 = 24.9%, Q3 = 25.2%, Q4 = 25.2%). [3] Compared to a uniform distribution (25% per quartile), the day-corrected distribution accounts for the varying number of days per month. [3] A series of Chi-squared tests of independence (χ2) was used to test differences in birth quartile frequencies of the HP-student-athletes against the general student cohort according to year and sex. Fisher's exact tests were used to assess significant differences in birth quartile frequencies according to sport code.
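A minimal sketch of the day-corrected goodness-of-fit test described above is given below, with hypothetical quartile counts rather than the study data; a birth month m maps to quartile (m - 1) // 3 + 1.

    import numpy as np
    from scipy.stats import chisquare

    # Hypothetical birth counts per quartile (Q1 = Jan-Mar ... Q4 = Oct-Dec).
    observed = np.array([290, 260, 230, 170])

    # Day-corrected expected proportions from the text, rescaled to sum to 1.
    day_corrected = np.array([0.247, 0.249, 0.252, 0.252])
    expected = day_corrected / day_corrected.sum() * observed.sum()

    chi2, p = chisquare(f_obs=observed, f_exp=expected)

    # Standardised residuals identify the quartiles driving a significant result.
    residuals = (observed - expected) / np.sqrt(expected)
    print(f"chi2 = {chi2:.2f}, p = {p:.4g}, residuals = {np.round(residuals, 2)}")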
For all analyses, a p-value of <0.05 was the criterion for statistical significance.

Results
Table 1 contains the between-group comparison results per sex (i.e., HP-student-athlete men/women versus general-student-cohort men/women) and year. Table 2 reports the Fisher's exact test results and residuals for each sport code. Figure 1 depicts the group differences in birthdate distribution for the HP-student-athletes and the general student cohort for each year. Figure 2 graphically illustrates the between-sex differences in birth quartile among the HP-student-athletes. Figure 3 presents the between-sex comparisons for the general student cohort as well as the eight sport codes that comprised men and women participants. Birth quartile graphs for netball (women players only) and for rugby and cricket (men only) complete the figure.
The birthdate distribution of the HP-student-athletes differed from that of the student cohort. There were no RAEs in 2016 and 2018, despite consistent Q1 and Q2 over-representation. RAEs were more prevalent among the men than among the women student-athletes. Between-sex differences (medium effect) were noticeable in 2016 only. A 'spike' was noted during Q2 in 2017, before normalising again in 2018. From 2019 onwards, the relative proportion of Q1- and Q2-born HP-student-athletes increased for both sexes. Among the women student-athletes, a substantial increase was evident in those born during Q2 over the six years. There were no between-sex differences when men and women student-athletes from the same sport code were compared.
Discussion
To the best of our knowledge, this is the first study to investigate the prevalence and magnitude of the RAE among South African university student-athletes. The RAE was more pronounced among the student-athlete sample compared to the age-matched student cohort. Interestingly, the birth distribution of the student population was slightly skewed towards relatively older students. The prevalence of the RAE among the student population, albeit small, would extend to the HP-student-athlete sample, increasing the likelihood that more Q1- and Q2-born athletes would be competing at this level.
The RAE was only occasionally observed among the HP student-athletes, but the prevalence was inconsistent across the respective years and sport codes; the effect was absent more often than it was present when the respective subgroups were compared. The HP student-athletes' birthdate distribution differed significantly from that of the general student cohort for two (women sample) and three (men sample) of the six years under investigation, respectively. The RAE was only prevalent in three of the sport codes: swimming, cricket, and rugby, and there were no sex differences among the HP student-athletes. The actions of different social agents and contextual factors (e.g. developmental pathway, the level of competitiveness and sports popularity) may have contributed to these sport-specific findings. [4,12] The initial selection bias may have merely perpetuated over time. [5] By doing better, relatively older athletes probably received more rewards for their accomplishments, leading to greater psychosocial investment and a better prospect of retaining their participation status, resulting in the RAEs still being prevalent at university sports levels. [4,6] The findings for swimming (i.e. more Q2 than Q1-born swimmers) are atypical and may be attributed to a higher likelihood of an unusual distribution by mere chance, due to the small sample size. Still, there was a bias towards swimmers born in the first half of the year.

Fig. 3. Distribution of birth quarter for each sport code by sex. Dotted line at 25% indicates reference for uniform distribution.
Structural changes to the South African first-class cricket competition (i.e. cutting the 11 professional teams to six franchises, thereby reducing the viable development pathways) may have contributed to the RAE observed among the cricket players. [13] Once relatively younger participants deviate from the traditional player pathway, they might find themselves in a development and learning environment with weaker competition and restricted opportunities for progression. [13,14] The RAE observed in this study resonates with previous high school [15] and senior [6] rugby union research in South Africa, thereby adding more information about the pathway to professional rugby. Rugby union is a strong candidate for RAE prevalence based on high physicality (task constraint), cultural relevance and popularity (environmental constraints). [2,6] A residual bias may accumulate from being selected early in the process. Subsequently, fewer relatively younger players may come through the tertiary education pathway. [13] It is difficult to explain the absence of the RAE in soccer and basketball, considering the consistent prevalence reported in these sports in other contexts. [12,16] If students only take up sport later in life, i.e. post-puberty, there could be fewer developmental variations (e.g. weight, height) when they reach the university sports level. This may reduce physical selection biases and the prevalence of the RAE. Additionally, in comparison to rugby union and cricket, few soccer and basketball players use university sports as a springboard to elite sports. Sports like basketball and soccer may adopt a more flexible approach, where university coaches tend to accept almost any student who wants to be part of the programme, which encourages more students to join these programmes regardless of their initial skill and/or experience, thereby possibly moderating the RAE.
The data showed that only 2% of the student population was part of the HP-student-athlete programme. Women were under-represented in the HP-student-athlete cohort (32% women vs 68% men). This is concerning, considering there were more women than men students (approximately 57% vs 43%, respectively), and raises questions about the under-representation of women in university sports. The differential distributions observed in women HP-student-athletes could be explained by socialisation or a self-restriction process. The "gender inappropriate" stigma attached to women's sport may have weakened the effect, allowing both relatively older and relatively younger women student-athletes to continue their participation.
Psychological perspectives embrace the notion of the self-fulfilling prophecy, i.e. the greater the expectation (self-expectations, coach or parent expectations) placed on the player, the greater the achievement result. [5] Studies revealed that coaches held greater expectations of participants born in the first quarter (Q1) of the year than of those born in the fourth quarter (Q4). [17] The support provided to athletes during key developmental periods and the developmental experiences created during practice sessions and matches influence their transition and progression. [1,18] Hence, to limit the possible negative consequences of the RAE, swimming, rugby, and cricket administrators should offer diverse solutions to benefit all participants during different participation and development phases.
Talent selection programmes should incorporate a broad range of selection criteria including objective assessments of physical attributes, technical skills, and psychosocial characteristics. Considering that relatively younger players can still reach top-level senior sports, practitioners should consider the delayed development trajectories of some of the young participants and support participants as they transition from high school to university teams. This support is needed both before and once they arrive at university.
Although several solutions have been proposed for youth sports, [19] few have been implemented successfully or tested empirically. Whilst raising awareness is important to address the RAE, it is likely to be insufficient. Moreover, it would be naive to enforce any of the earlier practical recommendations as solutions to reduce this phenomenon, because of the absence of direct evidence that their application will reduce the effect. Furthermore, the current findings were limited to information on date of birth, sport codes and sex. It may be too late to implement such interventions at the university sport participation level. Focusing on developing a broader understanding of the processes influencing early- and late-developing student-athletes may be more appropriate.
Conclusion
A small RAE was observed among the general student cohort. Analyses of the subgroups revealed inconsistent annual variations among the HP-student-athletes. The RAE was further confined to swimming, cricket, and rugby only, and there were no sex differences in the HP-student-athlete cohort. The observed RAE exemplifies a social inequality that inhibits the prospect of immediate and long-term participation in university HP sport. Even though South African student-athletes are seldom professionals, equal opportunities should be given to everyone to become an HP student-athlete, regardless of date of birth. Even if this bias is unintended, it should be prudently assessed, given the rewarding nature of some sport codes (e.g. access to high-quality resources, television coverage, recognition, financial and academic support). The prevalence of the RAE in these sports may point toward underlying mechanisms and problems with talent identification, selection, and youth sport development initiatives.
A limitation of this study is the small sample size (especially when split into sport codes). Whilst the present study is representative of student-athletes from a South African university and provides information on the general prevalence of the RAE at this competitive level, it is not comprehensive. Findings from this study are therefore context-specific and should not be generalised to other universities or countries.
Future studies should examine the mechanisms responsible for the prevalence of RAE or the lack thereof at various participation levels (e.g. primary school, high school, and sports academies). Though not examined in this study, it is reasonable to assume a degree of interaction among various constraints. Various individual physical abilities and psychological skills, tasks (playing position, participation level and physicality of the sport), and environmental constraints (popularity of the sport, coach and family influence, sport-code rules, and policies) should be considered and measured explicitly to gain a better understanding of their association with the RAE. Our understanding of these interactions remains limited. Studies may also benefit from triangulating findings from qualitative and quantitative sources and should utilise a sound theoretical framework, such as the Athletic Talent Development Environment model. [20] | 2022-10-29T15:10:36.375Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "9c88bbb0939f59489bedc72c05193979faac7691",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ScienceParsePlus",
"pdf_hash": "b2d3771f08c6788baea0c89145d7ac596d05d93d",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Medicine"
]
} |
221860351 | pes2o/s2orc | v3-fos-license | B chromosomes of multiple species have intense evolutionary dynamics and accumulated genes related to important biological processes
Background One of the biggest challenges in chromosome biology is to understand the occurrence and complex genetics of the extra, non-essential karyotype elements, commonly known as supernumerary or B chromosomes (Bs). The non-Mendelian inheritance and non-pairing abilities of B chromosomes make them an interesting model for genomics studies, raising different questions about their genetic composition, evolutionary survival, maintenance and functional role inside the cell. This study uncovers these phenomena in multiple species that we considered representative organisms of both vertebrate and invertebrate models for B chromosome analysis. Results We sequenced the genomes of three animal species, including two fishes, Astyanax mexicanus and Astyanax correntinus, and a grasshopper, Abracris flavolineata, each with and without Bs, and identified their B-localized genes and repeat contents. We detected unique sequences occurring exclusively on Bs and discovered various evolutionary patterns of genomic rearrangements associated with Bs. In situ hybridization and quantitative polymerase chain reactions further validated our genomic approach, confirming the detection of sequences on Bs. The functional annotation of B sequences showed that the B chromosome comprises regions of gene fragments, novel genes, and intact genes, which encode a diverse set of functions related to important biological processes such as metabolism, morphogenesis, reproduction, transposition, recombination, the cell cycle and chromosome functions, which might be important for their evolutionary success. Conclusions This study reveals the genomic structure, composition and function of Bs, providing new insights for theories of B chromosome evolution. The selfish behavior of Bs seems to be favored by gained genes/sequences.
Background
B chromosomes (Bs) are additional, non-essential extra chromosomes which show non-Mendelian inheritance and, unlike the normal A chromosomes, lack the ability of meiotic pairing [1,2]. The genomic characterization of Bs has remained elusive since their discovery [3] about 112 years ago. It is supposed that around 15% of eukaryotic species contain Bs [4], with animals, plants and fungi reported thus far [5], but several questions about their evolutionary origin and function remain unanswered. At first, Bs were presumed to be neutral or genetically inert elements of the genome, but it was later found that Bs may have either a detrimental or a beneficial role (reviewed in Ahmad and Martins [2]). Bs have been found to decrease fertility in maize [6,7], whereas in extreme situations they may also reduce genome fitness and eliminate all paternal chromosomes, as in the wasp Nasonia vitripennis [8], and they can contain protein-coding genes [9,10]. Although the classical view of Bs as selfish elements has been evoked in many cases, there are relatively few studies that have reported any detrimental effect associated with Bs. Recent studies have provided the new perspective that Bs are not genetically silent but rather can carry transcriptionally active copies of rDNA sequences [11] and protein-coding genes [12].
B chromosomes have been reported in 744 animal species including insects, mammals and fishes (http://www.bchrom.csic.es [5]). Among vertebrates with Bs, Astyanax fish have emerged as an exciting model for B chromosome research. This genus has been extensively investigated cytogenetically because of the high prevalence of polymorphisms, including diverse B chromosome morphotypes found in 14 species [13][14][15][16]. One of these species, A. correntinus, has 36 chromosomes [17] plus a macro B chromosome (unpublished data). Another Astyanax species with Bs is A. mexicanus, commonly known as the blind cavefish or Mexican tetra, which inhabits cave regions, presents troglomorphic traits and is an attractive model for evolutionary biology and developmental studies [18,19]. Cytogenetic characterization of the blind cavefish revealed a karyotype of 2n = 50 chromosomes with 1 or 2 micro Bs [20]. Although Bs have been extensively investigated in Astyanax, including genomic studies, the analyses were mostly focused on cytological observations and on the investigation of repeated DNAs [21]. No study has presented a deeper genomic view of the B gene content, and such analysis is therefore required to reveal the genomic contents and understand the B chromosome biology of this organism.
Among insects, grasshoppers (Orthoptera) are another interesting group in which B chromosomes have been investigated in a large number of species [22]. The grasshopper Abracris flavolineata has previously been investigated with cytogenetic methods and has 2n = 23, X0 (males) plus one or two B chromosomes [23]. Over the years, information has accumulated for grasshoppers regarding B chromosome population dynamics and possible origins. However, the knowledge obtained about the molecular composition of B chromosomes in this group of species focuses on the characterization of the B repetitive genomic content [24][25][26].
The cytogenetic analysis of B-specific sequences using fluorescence in situ hybridization (FISH), covering most repetitive DNA types such as dispersed and tandem repeats, rDNA sequences and histone genes, remained the primary interest of most studies from the 1990s to the 2010s [27][28][29][30][31][32][33]. Characterization of the genetic composition of isolated Bs in different species was further facilitated by flow-sorting and micro-dissection techniques [34]. However, these techniques provide limited material and thus do not fully reveal the relationship of homologous sequences between A and B chromosomes or the complete gene content of Bs. During the last decade, next generation sequencing (NGS) technologies have elevated B chromosome research into a new era of "B-omics" [2]. The multi-omics revolution has offered new opportunities to resolve the classical limitations of cytogenetic analyses. The pioneering study that applied NGS to B chromosome content concluded that the rye B is enriched in pseudogenes as well as different repeat elements [35]. Similarly, a comprehensive genomic analysis of the B chromosome in the cichlid fish Astatotilapia latifasciata revealed that the B comprises thousands of fragmented genes as well as potentially transcriptionally active intact genes [36]. Later, evidence was found that the Bs of the grasshopper Eyprepocnemis plorans harbor at least ten genes, among which five are expressed [37]. Recently, genomics- and transcriptomics-based analyses have found several transcriptionally active sequences on B chromosomes [37][38][39][40][41][42][43]. Taken together, these findings have sparked an exciting debate about the genomic composition, function and evolution of Bs. Here, we sequenced and analyzed the B carrier genomes of the insect A. flavolineata and the fishes A. correntinus and A. mexicanus to reveal their B-linked repetitive and gene content, to test the hypothesis that the B chromosome accumulates sequences from its host genome for its selfish transmission, and to investigate whether the preferential accumulation of these sequences is a conserved feature across species. We found evidence that a considerable amount of genomic sequence has migrated from A chromosomes to the B via transposition, duplication and rearrangement events. Contrary to classical theories that B chromosomes are gene poor, we found that they are gene rich and contain many protein-coding genes. B chromosomes appear to gain sequences that are crucial for their own establishment inside the cell. Besides genes that may confer a transmission advantage to Bs, there are others coding for many important biological processes.
NGS data and coverage-ratio analysis detect sequences on the B chromosomes
The karyotype analysis identified diploid chromosome numbers (without B) of 36 and 50 for A. correntinus and A. mexicanus, respectively. The 2n = 36 of A. correntinus consists of 12 metacentric, 16 submetacentric, 2 subtelocentric, and 6 acrocentric chromosomes, while the 2n = 50 of A. mexicanus consists of 8 metacentric, 18 submetacentric, 12 subtelocentric, and 12 acrocentric chromosomes. A large submetacentric B chromosome was found in 9 (5 males and 4 females) out of 21 karyotyped samples of A. correntinus. For A. mexicanus, a tiny dot-shaped B micro-chromosome was detected (Fig. 1a, b). Interestingly, across a total of 39 analyzed individuals, the B in A. mexicanus was found only in males, whereas no female with a B was found, indicating a possible B male-specificity. Out of the 39 karyotyped individuals, 12 were B+ males, 8 were B- males and 19 were B- females. In some individuals of A. mexicanus we observed 2 B micro-chromosomes. The karyotype analyses of the grasshopper A. flavolineata (Fig. 1c) confirmed 1 or 2 submetacentric B chromosomes. A total of 69 individuals, 32 males and 37 females, were karyotyped, of which 14 samples carried 1B while only 3 carried 2B. The regular male karyotype is 2n = 23 without a B chromosome (14 subtelocentric + subtelocentric X chromosome + 4 submetacentric + 4 metacentric).
We sequenced a total of 8 samples across the 3 species, generating data summing to around 124×, 82× and 28.1× coverage across all individuals for A. mexicanus, A. correntinus and A. flavolineata, respectively (Table 1). Mapping of filtered reads to the reference genomes (both from databases and de novo assembled) resulted in overall alignment rates of around 92, 91 and 95% for A. mexicanus, A. correntinus and A. flavolineata. The B chromosome sequencing reads were mapped to the reference genomes; hence, mapped reads representing a B chromosomal region in a B+ sample show an increased coverage level compared to the aligned reads of the B- sample (see a summary of the coverage detection steps in Supplementary Fig. S1). Such regions of remarkably higher coverage, called B blocks, were detected in the genomes of each of the three model species (Figs. 1 and 2a). There were multiple regions in the B+ genomes with several B blocks in close proximity to one another, suggesting that a larger region was likely transferred to the B chromosome as a whole segment rather than as multiple smaller segments. Sequencing of B+ samples yielded a lower genome coverage (around 10×) for A. flavolineata due to its large estimated genome size (6.3 Gb; Table S1), so we could not derive a comprehensive list of B-blocks, or perform the subsequent gene integrity analysis, for this species. However, while incomplete, we were able to find a considerable amount of B chromosome sequence for this species (Supplementary Fig. S4).

[Figure legend (coverage and scatter plots), in part: coverage plots for each species compare B- (0B) and B+ (1B and 2B) genomes; significantly higher B+ coverage (red peaks) relative to B- (blue peaks) indicates genomic regions amplified on the B chromosome and underrepresented on the As; axes show read depth and genomic position of the B block, and blocks are named by their position in the respective assembly. Scatterplots compare read abundance per extracted block (up to 2000 reads) between B- and B+ genomes; each red dot is a single block, and blocks above the diagonal, inclining towards the B+ axis, carry extra copies on the B chromosome.]
To identify intact genes on the B chromosomes, we calculated an integrity score for each gene sequence annotated in the B-blocks. The majority of B-located genes of A. correntinus (91%) and A. mexicanus (93%) have integrity scores < 50% (Fig. 2a; Supplementary dataset 2). The NGS data analysis, indicating higher numbers of B-blocks, repeats and genes for A. correntinus than for A. mexicanus (Fig. 2a), coincides with the karyotype data with respect to B chromosome size.
The functional annotation of genes detected on the B chromosomes was determined and gene ontology enrichment was performed. We considered the complete list of genes (both fragmented and intact) for the micro B of A. mexicanus. Only genes with an integrity percentage > 50% were considered for the macro B of A. correntinus, due to the large number of gene fragments observed. The GO analysis of these genes on both micro B and macro B revealed an enrichment of genes in cellular processes such as microtubule processes, transposition, recombination, and telomere maintenance, all groups with remarkably high -log10 P-values (Fig. 2b). These functions are significantly over-represented on both micro B and macro B, indicating that the B chromosome tends to gain gene content that maintains its transmission through cell division and facilitates its evolutionary success.
We retrieved a list of highly intact genes detected on the Bs that are directly involved in chromosome formation and cell cycle related functions (Table 2). These cell cycle genes were found in all of our analyzed species, indicating their importance in the establishment and maintenance of Bs. To further extend our insights into the protein-coding sequences found on the B chromosomes, we employed a comparison-based strategy using the difference in counts of reads mapped against reference transcript contigs. Mapping of the Illumina reads from the B- and B+ genomes onto the coding sequences (CDS) of the transcriptomes revealed a total of 38,071 transcript contigs for A. mexicanus, 34,301 for A. correntinus and 3916 for A. flavolineata with more than 40 reads mapped in both the B- and B+ genomes. Graphical representation of the B- and B+ read counts showed the presence of CDSs over-represented in the B+ genomes (Fig. 3, Supplementary Figs. S5 and S6). Remarkably, a total of 100 and 53 CDSs showed a log2 2B/0B quotient > 1.5 for A. mexicanus and A. flavolineata, respectively, and 436 CDSs for A. correntinus showed a log2 1B/0B > 1, i.e. the expected value if each B chromosome carried at least one copy of the CDS (see Methods). Annotation revealed that most of these CDSs were orthologous to protein-coding genes in the reference gene sets, while others were identified as repeat elements. Some CDSs did not align to the references, and we therefore termed them non-annotated or unknown. The annotation detected several novel genes on the B chromosomes of Astyanax species. The CDSs with the highest log2 quotients, representing high-confidence B chromosome presence, are listed in Table 3 and Supplementary dataset 3 for each species. The coverage patterns of some of these CDSs were also visualized to confirm the higher peaks in B+ compared to B- genomes, providing evidence of their expanded copy number on B chromosomes (Supplementary Figs. S7, S8, S9).
The Bs harbor unique and exclusive sequences
To reveal the unique sequences found on the B chromosomes, we first performed de novo assemblies of the B+ genomes. A B+ reference genome is expected to contain both A and B chromosome sequences; thus, comparing the mapping of B- and B+ genomes revealed the B-specific sequences. These B-specific sequences were identified on the basis of reads mapped to the reference de novo assemblies containing B chromosomes. The sequences were compared such that no alignments were recorded for B-, while the B+ genomic reads had uninterrupted alignments with a minimum of 50× coverage over the same region (see Methods). From these alignments, a total of 140, 1698 and 247 exclusive B regions with a minimum sequence length of 200 bp were obtained for A. mexicanus, A. correntinus, and A. flavolineata, respectively. We were able to extract sequences up to 3 kb long where alignments of at least 50× coverage were recorded along the complete region for the B+ genomes but negligible alignments for the B- genomes. Interestingly, we found that the number of mapped reads increased proportionally with the number of Bs (Fig. 4; Supplementary dataset 4) but remained null in B- genomes. A BLAST search of these exclusive B sequences against the NCBI "nr" database did not return any significant hit, indicating their unique and novel nature. A 'somewhat similar sequences' search returned weak hits to mitochondrial genes for most of these sequences in A. flavolineata.

[Fig. 4 legend, in part: for each species, coverage plots (middle) of an example exclusive block show the read depth, confirming exclusive representation in the 1B and 2B genomes; mean coverage plots (left) of all exclusive blocks show the fraction of the genome at each coverage. The mapped reads, read depth and mean coverage of the 0B genomes are negligible in each species, confirming the absence of these sequences on A chromosomes and their specificity to B chromosomes.]
The micro B of the cavefish was invaded by satDNA and amplicon gene-like sequences

To investigate the abundance of sequences in B+ genomes and to validate the coverage-based identification of B chromosome sequences, we used qPCR for relative copy number quantification of 10 randomly selected B blocks in A. mexicanus with 0B, 1B and 2B genomes. GDR values were determined from the qPCR results. Higher GDR in the 2B and 1B genomes compared to 0B was confirmed for all 10 representative blocks selected for this analysis, confirming our NGS analysis and coverage approach (Fig. 5; Supplementary dataset 5). In addition, fluorescence in situ hybridization (FISH) of two selected B-blocks further confirmed their abundance on the micro B (Fig. 5). The FISH showed specific, concentrated signals on the B and some subtle, small hybridization signals on a few A chromosomes. The coverage abundance, qPCR and FISH results indicate a strong correlation between the NGS and experimental approaches and validate that these sequences are highly amplified on the B of A. mexicanus. The two FISH-mapped sequences were annotated as the apa-sat 26-129 satellite and a tnf-8-like gene, which appear to be highly amplified on the B. To further investigate the chromosomal organization of the B blocks, we also performed double FISH mapping of randomly selected blocks in A. flavolineata. Although these blocks did not show any B-specific abundance, the A and X chromosomes showed distinct hybridization marks, mostly clustered in telomeric regions (Supplementary Fig. S10). However, relatively less abundant and scattered signals were observed on the B chromosome for certain candidate blocks. The mapped candidate sequences were searched against the 'nr' database, and no similarity was found with any gene or repeat element; they were assumed to be unknown/uncharacterized genomic regions.
A majority of B-localized sequences show a high level of methylated Cs within CpGs of the cavefish genome

The GO enrichment analysis of cavefish B-localized genes indicated that the term "methylation" was among the most enriched terms (Fig. 2b). We therefore took advantage of the available bisulphite sequencing data for the cavefish in the NCBI/SRA database to call methylation of the B chromosome sequences. To analyze the methylation level of the micro B sequences in the cavefish genome, we mapped the bisulphite-treated reads, sequenced previously by Gore et al. [45], to the B-blocks. The Bismark mapping of bisulphite Illumina reads to the B blocks of the cavefish yielded a total of 79,288,266 methylation call strings with an overall mapping efficiency of 32%. The proportion of methylated Cs in the CpG context was 52.1%, remarkably higher than the proportion of methylated Cs in the non-CpG context, which was only 16.2% (Supplementary Fig. S11). These data show that the Cs of B chromosomal sequences are hypermethylated within CpG regions. The original bottom strand (OB) alignments show that, out of a total of 18,340 B blocks, 6035 blocks had more than 50% methylated Cs within the CpG context. In contrast, 7560 B blocks were found with less than 50% methylated Cs, and the remaining 4745 blocks were unmethylated. Of the 6035 blocks with > 50% methylated Cs, 774 B blocks reported a high level (> 90%) of methylated Cs, suggesting these B-localized sequences might have been repressed or down-regulated due to hypermethylation (Supplementary dataset 6). We further detected a total of 722 CpG islands in the micro B sequences.
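As a rough illustration of this tallying step, the sketch below computes the percentage of methylated Cs per context from a Bismark-style cytosine (CX) report restricted to B-block coordinates. The input file name is hypothetical, and the column layout assumes Bismark's standard CX report (chromosome, position, strand, methylated count, unmethylated count, context, trinucleotide); this is a minimal sketch, not the authors' pipeline.

```python
from collections import defaultdict

def methylation_by_context(cx_report_path):
    """Summarize percent methylated Cs in CpG vs non-CpG contexts from a
    Bismark cytosine (CX) report with columns: chrom, pos, strand,
    count_methylated, count_unmethylated, context, trinucleotide."""
    meth = defaultdict(int)
    unmeth = defaultdict(int)
    with open(cx_report_path) as fh:
        for line in fh:
            fields = line.rstrip("\n").split("\t")
            m, u, context = int(fields[3]), int(fields[4]), fields[5]
            key = "CpG" if context == "CpG" else "non-CpG"
            meth[key] += m
            unmeth[key] += u
    for key in sorted(meth):
        total = meth[key] + unmeth[key]
        if total:
            print(f"{key}: {100.0 * meth[key] / total:.1f}% methylated Cs "
                  f"({total} calls)")

# Hypothetical input; B-block coordinates would be intersected beforehand.
# methylation_by_context("b_blocks_CX_report.txt")
```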
Repeatome landscapes indicate an abundance of LTRs and DNA transposons on the B chromosome of Astyanax
We performed a comparative analysis to investigate relative TE abundance and to detect possible differences in TE content between the regular A chromosomes and the B chromosomes of Astyanax species. Interestingly, we found that the Bs carried a higher percentage of TEs, DNA transposons and especially LTR elements than the A genome (Fig. 6a, b). The repeat landscapes of both A and B chromosomes show a large number of DNA transposon and LTR insertions (Fig. 6), reflecting a wave of transposition that occurred during the genome evolution of Astyanax. Remarkably, the FISH mapping of some of these (representative) elements confirmed the repeat landscape analysis (Fig. 6c).

[Fig. 5 legend, in part: The invasion of amplicon sequences on the micro B of A. mexicanus, experimentally confirmed by qPCR and FISH. (a) Coverage plots of the apa-sat 26-129 satellite and the tnf-8-like gene with the corresponding GDR qPCR comparisons between B- and B+ genomes; higher coverage and GDR in the B+ genome indicate duplicated copies of these sequences on the B chromosome. (b) FISH mapping further validated these sequences, showing specific marks (red) on the micro B (white arrows); metaphase chromosomes are counterstained with DAPI (blue). (c) Coverage plots of representative B blocks on the micro B with corresponding qPCR GDR; BLASTn alignment of these blocks to Ensembl annotations returned several overlapping genes, such as tnf-8 (cell death), dpysl2b (microtubule binding), fgf11b (development, morphogenesis, mitogenic and cell survival activities), a Zinc finger BED domain daysleeper-like gene (chromatin remodelling), zgc:77262 (mRNA splicing), ralgps1 (cytoskeleton organization) and dchs (cell adhesion).]

[Fig. 6 legend, in part: Comparative analyses of TE composition between A and B chromosomes. (a) Repeat landscapes of the A genome and the micro B and macro B of A. mexicanus and A. correntinus; the X-axis shows the percentage of TEs in the genome and the Y-axis the Kimura distance, from 0 for recent TE copies to 50 for old insertions; black arrows mark a recent wave of transpositions in the Astyanax genome, and green arrows the higher abundance of LTRs (green) and other retroelements (blue) in the B landscapes. (b) Donut charts comparing repeat composition, with outer and inner rings depicting A and B chromosomes respectively; the higher percentages of LTRs, retroelements and simple repeats confirm their relative abundance on the Bs. (c) FISH of representative Tc-Mariner, Gypsy and Rex elements on metaphase chromosomes of A. mexicanus and A. correntinus with Bs; a dispersed pattern across diverse chromosomes, including the Bs, was observed, with magnified views of the Bs showing marks of the corresponding elements; the abundant signals indicate the copious nature of these TEs in the Astyanax genome, in parallel with the landscape analyses.]
We found the typical dispersed hybridization signals for the respective FISH probes of Gypsy and Rex (retrotransposons) and Tc-Mariner (DNA transposon) on the B chromosomes (Fig. 6c). Both FISH and bioinformatic analyses showed that these elements are scattered throughout the genome of Astyanax. They appear widely distributed across all chromosomes, with specific concentrations in certain regions. The wide distribution of these elements across all chromosomes indicates that a series of transposition events occurred during the karyotype evolution of Astyanax. In addition to TEs, the Bs contain a remarkably higher proportion of simple repeats compared to the As (Fig. 6b).
We also analyzed rDNA clusters by FISH to visualize rDNA organization in the A. mexicanus genome. The NGS annotation of B-blocks in A. mexicanus did not reveal any 45S rDNA clusters, indicating an absence of 45S rDNA on the micro B chromosome. FISH confirmed the absence of signals on the micro B, as indicated by the NGS data (Supplementary Fig. S12). However, we identified eight rDNA cluster sites on the A chromosomes, preferentially distributed in the terminal portions of the short arms of the resident chromosomes.
Comparative genomics analysis deciphers rearrangements and sheds light on the evolution of the B chromosome
Using a reference-guided approach, we successfully anchored our short-read B- and B+ Illumina assemblies of A. mexicanus and A. correntinus into chromosomes and performed a comparative genomic analysis of these genomes. Whole genome alignment of the B- and B+ assemblies with the SyRI software [46] identified a total of around 6.3 Mb of rearranged sequence in the B+ genome, comprising various types of genomic rearrangements such as duplications, inversions, translocations, insertions, copy-number gains and tandem repeats (Fig. 7a; Fig. S13). A total of 1.13 Gb (87%) of the two genomes was syntenic between them. In addition, a considerable number of these rearrangements were detected within the unplaced scaffolds (Fig. 7 III, circos plot), which represent the majority of the B chromosome sequences. The de novo B+ assembly of the cavefish genome was also aligned against the repeat-masked assembly, representing only coding sequences, to reveal synteny patterns (Supplementary Fig. S14).
We further traced B-associated patterns indicative of duplications and inversions through self-aligned syntenic dotplot analysis of the B blocks of the three species (Fig. 7b; Supplementary Figs. S15, S16, S17). A close view of the syntenic dotplots shows overlap of the lines when they are projected onto one axis or the other. The patterns of segmental duplications and inversions visualized in these dotplots suggest that chromosomal rearrangements may be the main evolutionary forces deriving the B chromosome sequences.
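The dotplot logic itself is simple; a minimal shared-k-mer sketch (toy sequence and k chosen arbitrarily here, not the software used in the paper) illustrates how a duplicated segment produces a second, offset diagonal in a self-alignment, while an inverted segment would appear as an anti-diagonal run:

```python
def shared_kmer_dotplot(seq_x, seq_y, k=12):
    """Return (x, y) coordinates where seq_x and seq_y share a k-mer.
    In a self-alignment (seq_x == seq_y), off-diagonal runs indicate
    duplications and anti-diagonal runs indicate inversions."""
    index = {}
    for i in range(len(seq_x) - k + 1):
        index.setdefault(seq_x[i:i + k], []).append(i)
    points = []
    for j in range(len(seq_y) - k + 1):
        for i in index.get(seq_y[j:j + k], ()):
            points.append((i, j))
    return points

# Toy example: a repeated 60 bp segment yields extra parallel diagonals.
block = "ACGTACGTTGCAGGTTACCA" * 3 + "TTTT" + "ACGTACGTTGCAGGTTACCA" * 3
points = shared_kmer_dotplot(block, block, k=12)
print(len(points), "matching k-mer pairs")
```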
The B chromosomes of multiple species exhibit similar functional behavior but different genetic contents

We also analyzed publicly available NGS data from microdissected B chromosomes of additional species (Table S3). A pseudo-scaffolding-based strategy, with a spacer length of 10 kb, was used to assemble these chromosomes for annotation and gene ontology analysis.
Although the NGS data of the microdissected Bs do not cover their complete sequences, we present an estimated, preliminary assembly and annotation of their gene and repeat contents. The repeat annotations of these Bs showed different levels of each repeat type across species (Fig. 8a). The Bs of the fish species (B1 and B2) are mainly comprised of simple repeats. Other repeat types, such as low-complexity sequences and DNA transposons, were also abundant in the fish Bs, which however lacked SINEs and satellites. Similarly, the B of the grasshopper (B3, of E. plorans) was also enriched in simple repeats, but notably its second most abundant repeat sequences were retroelements (SINEs and LINEs), which were not abundant in B1 and B2 of the fishes. On the other hand, the Bs of Apodemus species (B4, B5 and B6) contained an abundance of SINEs and LINEs but lacked the satellite sequences found in higher numbers in the grasshopper species. The gene annotation of all microdissected Bs revealed several genes overlapping with their reference annotations. Due to the low coverage of the sequencing data, the number of genes in the microdissected B sequences may be underrepresented. Interestingly, the GO enrichment analysis of the Bs of the different organisms shared some common over-represented functions such as metabolism, development and morphogenesis (Fig. 8b, c, Table S4; Supplementary dataset 7). Moreover, the Bs of these organisms may exhibit similar functional behavior, for instance in enriched functions such as cell cycle, mitosis, chromosome organization, telomere maintenance, and microtubule and spindle organization (Fig. 8d). These results highlight the importance of genes associated with these functions for the evolutionary survival of the B inside the cell.
Discussion
The current work demonstrates a high-throughput genomic analysis of B chromosomes in two vertebrate and one invertebrate species, as well as of microdissected B sequences from diverse organisms. We present a comprehensive analysis of the A. correntinus, A. mexicanus and A. flavolineata genomes for both B+ and B- individuals, with the aim of unveiling the genomic composition, structure, function and evolutionary dynamics of B chromosomes in these species. Applying a comparative coverage technique, we detected a total of 43.82 Mb and 15.41 Mb of sequence from different A chromosomes of A. correntinus and A. mexicanus, respectively, that has contributed to the composition of their B chromosomes. We found that at least 246 Kb and 58 Kb of unique sequence is exclusive to the Bs of A. correntinus and A. mexicanus, respectively, and does not occur in the regular A genomes. These NGS results parallel the sizes of the corresponding macro and micro Bs observed in their karyotypes, demonstrating that the coverage-based approach successfully deciphered a considerable amount of B sequence. Our characterization and annotation of B blocks in Astyanax species found a higher gene content and number of blocks for A. correntinus than for A. mexicanus, which is also in agreement with the karyotype data. The number of detected blocks for A. flavolineata was underrepresented owing to the low overall read coverage relative to its giant genome. Nevertheless, we were able to detect at least 2.05 Mb of A chromosomal DNA copied into its B, with 194 Kb of B-exclusive sequence.

[Fig. 8 legend, in part: (a) Repeat content comparison of the analyzed microdissected B chromosomes from diverse species, shown as merged bubble charts in which each bubble is a repeat type and each bar a species; the differences in repeat abundance among species suggest that the amount of these elements on the Bs depends on their abundance in the A genome of each species, e.g. the Bs of the mouse species (B4, B5 and B6) acquired many SINE, LINE and LTR retrotransposons but little satellite DNA, whereas the grasshopper B (B3) gained a considerable amount of satellite DNA besides the dominating simple repeats. (b) Bar chart of the numbers of enriched and non-enriched functions on the Bs of each species after Fisher's exact test. (c) Upset plot comparing GO terms among the Bs of all analyzed organisms; a total of 10 GO terms are shared across all studied species (Table S4), with an arrow pointing to the important shared term "nucleus"; the Y-axis corresponds to GO intersection size and the X-axis to unique and shared GO terms. (d) Enrichment clustering heatmaps for the microdissected Bs as well as for high-confidence genes (log2 ratio > 2) detected on the B of A. flavolineata. The abbreviations Lc, Ep, Afn, Ap, Afl, Ac and Am refer to L. calcarifer, E. plorans, A. flavolineata, A. peninsulae, A. flavicollis, A. correntinus and A. mexicanus, respectively.]
To perform a deep survey of DNA repeats, we applied a combination of approaches to predict the major TEs and their abundance in the genomes and to compare the B+ and B- repeat content. Our repeat analysis showed that the A. correntinus and A. mexicanus genomes are comprised of 66 and 35% repeats, respectively, dominated by DNA transposons, which is comparable to most published fish genomes [47][48][49][50][51][52]. We highlighted the major repetitive contents of the Bs, and our analysis identified the TEs, including retroelements, of the B chromosomes of multiple species. Several studies have previously found that Bs are generally enriched in TEs [4,24,33,[53][54][55][56][57][58][59], suggesting that TEs are the principal migrants to the Bs and may be key players in the insertion of other sequences from the As into B chromosomes during evolution. We found a high enrichment of the GO term "transposition" (Fig. 2b) on the Bs of A. mexicanus and A. correntinus, providing evidence to support this hypothesis. The high level of gene fragments on the Bs (with < 50% integrity) indicates that genic sequences might have been either inserted as fragments, or broken during migration from the As into the B chromosomes, as a result of a series of transposition events. Although the microdissected B chromosome sequence data were not sufficient to draw conclusions about their repeat contents, the comparative analyses support the hypothesis that B chromosome repeat content varies among species depending on the repeat content and abundance of the A genome.
The annotation of B-blocks revealed that the Bs contain many gene-like sequences. Our integrity analysis showed that Bs contain many fragmented genes, possibly pseudogenes, which might have formed from their parental genes on A chromosomes during incorporation into the B chromosomes. These putative pseudogenes may have lost their functional ability after duplication from the parental A genes. However, previous work [60] reported that the B of rye harbors pseudogene-like fragments which are expressed in a tissue-specific manner and thus might retain function.
In addition to the fragmented genes, there are complete genes that have remained intact, possibly due to their role in the evolutionary survival of the B chromosome. These findings support the emerging hypothesis, based on reported B-localized genes [36,37], that B chromosomes accumulate cell cycle genes that might play an important role in their transmission. Table 2 lists intact B-localized genes in multiple species that are directly involved in cell cycle regulation and chromosome organization, coding for proteins with a variety of functions such as chromosome segregation, spindle fibers, microtubules, chromatin organization, chromosome condensation and regulation of the cell cycle. The enrichment of genes associated with cell cycle and chromosome functions on both the micro B and macro B of Astyanax suggests that, independent of the evolutionary stage of the B, the gain of such genes benefits its transmission, further reflecting its selfish behavior. Remarkably, the GO enrichment analyses of the microdissected Bs of different species revealed similar patterns of functions, corroborating the emerging hypothesis that the evolutionary success of the B chromosome rests on its gene content. Enrichment analysis detected diverse GO terms for important biological roles such as metabolism, cell adhesion, reproduction, stimulus response, localization, morphogenesis and methylation. Genes with such functions have also been reported on B chromosomes in previous studies (see a comprehensive and updated list of B genes in the review by Ahmad and Martins [2]). Among the B-enriched functions, most are involved in developmental processes, particularly morphogenesis. The gene indian hedgehog b (ihhb), involved in morphogenesis, was previously identified as highly duplicated on the B chromosome of cichlid fishes [61,62]. These commonly enriched functions shared among the Bs of different species suggest that B chromosomes exhibit a conserved tendency to acquire certain roles, although their genetic makeup may vary across taxa. Notably, the higher level of metabolism-related B genes in A. mexicanus is interesting because this fish species has been reported to have a more efficient metabolism than other fishes [63]. The cavefish has evolved traits to tackle the scarcity of food in the cave environment; among these, sensitive mechanosensory organs and chemical senses are significant compensatory changes, possibly driven by strong selective pressures. The GO enrichment of such functions on the B of the cavefish offers exciting insight into whether the B chromosome provides extra genomic compartments for the evolutionary success of this species, suggesting that Bs might have played some role in shaping genome evolution for effective adaptation to the cave environment. Metabolism-related genes have been found on the Bs of cichlids [2,36]; interestingly, these fishes use mechanosensory receptors mainly for mating and species recognition, for which specific metabolism is required. We therefore hypothesize that the B chromosome plays a role in adaptation by acting on metabolism.
Besides the genes discussed above, genes related to reproduction were enriched on the Bs of A. correntinus, suggesting that Bs can also have a functional impact on sex determination, as previously described by Yoshida et al. [61] in cichlids. Our karyotype data from around 60 individuals of A. mexicanus, revealing male-specificity of the B chromosome, also point towards the possibility that the presence of Bs plays some role in determining sex in Astyanax [64].
The BLAST search of the B chromosome exclusive sequences did not reveal any significant homology in the 'nr' database, indicating their novel and unique nature. However, the few weak hits to mitochondrial genes suggest that such sequences, sourced from the mitochondria, might have been inserted into the B of A. flavolineata and subsequently evolved into novel sequences. The mitochondrial gene MTG1 (mitochondrial GTPase 1) was recently reported on the B of another grasshopper, E. plorans [37]. Mitochondrial sequences and B chromosomes in grasshoppers have previously been reported for their role in substitutions and variation among different populations [65]. Most (more than 90%) of the exclusive B chromosome sequences of Astyanax were characterized as novel, unknown genes, along with some long non-coding RNA sequences.
The epigenomic profile of the micro B in the cavefish genome provides a rough view of the methylation status of B chromosome sequences. Although our analysis supports that most B chromosome sequences in the cavefish are likely methylated, mainly within CpG regions, it remains to be seen whether any gene is repressed by methylation and whether these epigenetic changes have any effect on phenotypic variability. Furthermore, since methylation patterns are often tissue-specific in their effect on gene regulation, a comparative analysis of these sequences between B+ and B- individuals using the same tissue types would be much more informative for obtaining clear profiles of differentially methylated and expressed regions, and for explaining the impact of B chromosomes in this context. Taken together, our analyses suggest that DNA methylation of B chromosome sequences might be one of the principal mechanisms mediating the repression of many B-localized genes and preventing phenotypic impacts that might otherwise arise from the occurrence of the micro B in the cavefish genome.
The rearrangement analysis of the B- versus B+ genomes suggests that the cavefish genome exhibits extensive rearrangements that might have shaped its extraordinary evolution for adaptation. Moreover, the comparison of the B blocks to their ancestral A genome regions allowed us to infer the evolutionary mechanisms that led to the Bs. The homology of B sequences to different A chromosomal sequences indicates that, after the proto-B was formed, it gained sequences from across the rest of the genome and subsequently experienced duplications and rearrangements. An emerging model proposes that B sequences are most likely inserted through successive transposition, duplication and rearrangement events to form the B chromosome (see the evolutionary model in Fig. 9). The different sizes of the B blocks, ranging from a few hundred to thousands of bp, indicate that larger regions might have migrated to the Bs after the formation of a proto-B. The abundance of TEs in these blocks suggests that transposition facilitated the movement and incorporation of these sequences, followed by the duplication events detected in our analyses.
Although the identification of both fragmented and complete genes on the Bs provides interesting insights, it remains unknown whether these genes are active. While we predict most of the fragments are pseudogenes, further analysis of transcription levels will assist in understanding the exact structure and function of these gene fragments. It is possible that the enriched fragmented genes on B chromosomes represent gene fusions, and thus may be transcriptionally active but with functions altered from their progenitor genes. Furthermore, actively transcribed fragments from these truncated partial genes may have some function in regulating the activity of other genes through interference. Transcriptional activity of B-located cell cycle genes has been found in the grasshopper E. plorans [37]. While a few other studies have confirmed the transcriptional expression of B-located genes [38,42,60,66], functional tests of our detected B sequences will serve to identify the active genes controlling B chromosome behaviors such as sex bias and drive. A better understanding of the structure and function of the B can be achieved with a complete, high-quality B chromosome assembly, which will be a priority for B chromosome researchers in the future.
Conclusions
This paper contributes to understanding the genomic composition and the evolutionary and functional aspects of B chromosomes in multiple species. Applying a coverage-based comparative approach, we detected a considerable number of B chromosome segments that contain many gene fragments, a few complete genes and an abundance of TEs along with other repeat types. We revealed that the B-localized genes are associated with diverse functions, some of which may explain the evolutionary fate of the B chromosome. We also found patterns of genomic evolution, such as duplication and rearrangement events, that might have shaped the evolution of B chromosomes. Taken together, we conclude that the Bs, long believed to be inert elements, may in fact participate in relevant genome function and evolution. The present research opens new avenues and interesting prospects for future work, and encourages further studies investigating the expression of the detected B-localized genes to decipher their role in a myriad of cellular processes.
Methods
An overview of the methodology is illustrated in Supplementary Fig. S1.
Model organisms and karyotyping
We obtained specimens of A. mexicanus (cavefish) from a local fish store in Botucatu, Sao Paulo, Brazil. All the cavefish used in the present study were sourced from the same commercial company, "Aquarismo Aquamundi Botucatu". The specimens were maintained for karyotype analysis in the fish facility of the Integrative Genomics Laboratory of UNESP - Sao Paulo State University. Specimens of A. correntinus were collected from their natural habitat in the Iguaçu River, in a stretch of around 25 km between a point downstream of the Iguaçu Falls and the river's mouth on the Paraná River, Brazil. All A. correntinus specimens used in the present research were obtained with the permission and ethical approval of Western Paraná State University (UNIOESTE). The grasshopper A. flavolineata specimens were obtained from their natural habitat in Rio Claro, Sao Paulo, Brazil. The experiments involving all animals were performed according to the ethical guidelines of the Brazilian College of Animal Experimentation, and the use of specimens in the experimental work was approved by the ethics committees of the Institute of Biosciences/Unesp (Protocol no. 769-2015) and CEEAAP/Unioeste (Protocol 13/09). A total of 129 animals were used in the present study, comprising 21 A. correntinus, 39 A. mexicanus and 69 A. flavolineata individuals. This number of animals was required to perform the karyotyping and to observe B chromosome occurrence and frequency among individuals. The sample size was determined by the experimental requirements: identifying B-carrying individuals; obtaining samples of sufficient chromosome quality for karyotyping and FISH mapping; determining the ratio of 0B, 1B and 2B individuals; comparing male and female prevalence of Bs; and extracting genomic DNA for qPCR and genome sequencing.
All animals were euthanized in order to dissect and extract tissues for chromosome preparation and genomic DNA extraction. The fishes were euthanized by immersion in 1% eugenol anesthetic for three minutes. The grasshoppers were anesthetized with ethyl ether for about 10 minutes. Chromosome preparations of the Astyanax fishes were obtained from anterior kidney cells after 0.02% colchicine treatment for 30 to 40 min, following the protocol of Sumner [67]. Mitotic chromosomes of A. flavolineata were obtained from embryos. The karyotyping procedure involved classical cytogenetics using Giemsa staining to classify chromosomes as metacentric, submetacentric, subtelocentric or acrocentric. Thirty metaphase spreads from each individual were analyzed, and the ten best mitotic metaphases were used to measure karyotypes for each species. Male individuals were identified and confirmed by testis histology. Individuals carrying B chromosomes (B+) and those without (B-) were identified by karyotype analysis. The genomic DNA samples of B- and B+ male individuals were checked on agarose gels to verify integrity, and quantified with a NanoVue spectrophotometer and a Qubit fluorometer to obtain the concentrations required for sequencing.

[Fig. 9 legend: A schematic view of B chromosome evolution. In the first step, a proto-B is derived from multi-A sequences as a result of genomic rearrangements, and gains sequences from the A genome for its survival and successful transmission. In the second step, the proto-B accumulates further sequences through a series of TE insertions, ampliconic sequences and gene-like fragments, and forms unique sequences specific to the B. Finally, a mature B evolves, providing extra genomic material that may contain genes for diverse functions.]
Illumina next-generation sequencing
After quality control (QC), qualified DNA samples were processed for library construction. A total of eight samples from male individuals, including 0B (B-) and 1B and 2B (B+) animals of the model organisms, were sequenced on the Illumina HiSeq platform (Table 1). Genomic DNA of all eight samples was randomly fragmented to prepare sequencing libraries, followed by 5′ and 3′ adapter ligation. For each sample, a separate set of libraries was constructed, and paired-end sequencing was performed with a read length of 151 bp. Raw data from the Illumina HiSeq machine were processed with Illumina software to generate Fastq files. The Illumina reads were screened for sequence quality using the FASTX toolkit [68]; low-quality reads were discarded and adapters were trimmed with Trimmomatic [69]. Filtering parameters were set according to the FastQC [70] report; reads were filtered with the FASTX toolkit requiring a quality score of at least 28 over at least 80% of the bases of each read.
Genome alignments and de novo chromosome scale reference-guided assemblies
The chromosome-scale assembly of the A. mexicanus genome [45] was used as the reference for alignments of the A. mexicanus sequencing datasets. The A. mexicanus-2.0 assembly (GCA_000372685.2) was downloaded from Ensembl (https://www.ensembl.org/Astyanax_mexicanus/Info/Annotation) [71] and used as the reference genome for alignments of A. mexicanus B+ and B- reads. To create assembly references of genomes containing B chromosomes, we assembled the Illumina reads of the B+ samples of A. mexicanus, A. correntinus and A. flavolineata using SOAPdenovo [72]. We evaluated different K-mer sizes based on the read length, sequencing depth, total genome size and available computer memory for each species. We performed several trial runs of the assembly and chose the K-mer values (93 for A. mexicanus, 63 for A. correntinus and A. flavolineata) that yielded the maximum N50 and N90 in the finalized assemblies. We mapped the filtered Illumina B- and B+ genomic reads against their reference assemblies using the "very sensitive" preset of Bowtie2 [73]. We further de novo assembled the B+ and B- genomes (short-read assemblies) separately for A. correntinus and A. mexicanus and anchored the scaffolds into chromosomes (linkage groups) with RaGOO [74], using the retrieved chromosome-level assembly of A. mexicanus as the reference genome. The de novo assemblies were evaluated with QUAST [75] by computing several metrics (length, number, length variation, N50, gap length). Refer to Supplementary Note 1 for more details (Supplementary Table S1).
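For reference, the N50/N90 metrics used to compare the trial assemblies can be computed directly from contig lengths; a minimal sketch of the standard definition (QUAST reports these among other metrics, and the toy contig lengths below are illustrative only):

```python
def nx(lengths, x=50):
    """Nx: the length L such that contigs of length >= L together cover
    at least x% of the total assembly length."""
    lengths = sorted(lengths, reverse=True)
    threshold = sum(lengths) * x / 100.0
    running = 0
    for length in lengths:
        running += length
        if running >= threshold:
            return length
    return 0

contig_lengths = [5000, 4000, 3000, 1000, 500]  # toy example
print("N50 =", nx(contig_lengths, 50), "N90 =", nx(contig_lengths, 90))
```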
Coverage analysis and identification of B chromosome regions
The identification of sequences present on B chromosomes was based on a statistical comparison of aligned read coverage between the B+ and B- genomes, as proposed by Valente et al. [36], with modifications to improve the analysis. Sites with at least 15× read coverage in the B+ and B- genomes were selected, and the per-base coverage of the B+ and B- genomes was computed using bedtools [76] to obtain the mean B+/B- coverage ratio (MC). Then, the normalized coverage (NC) of a region was obtained as

NC = raw coverage / (region size × MC).

The mean ratio (MR) and its standard deviation (SD) were obtained from the NC of genome regions of most similar size and raw coverage that do not contain B sequences. Next, B+/B- regions with coverage < MC/2 were removed, and the B+ ratio (BPR) was calculated as

BPR = NC(B+) / NC(B-).

Regions with BPR ≥ MR + (SD × N) were selected, where N is the number of standard deviations required to call a block. In this way, we set an estimated threshold for detecting extra copies of A chromosome sequences in a B+ genome, regarded as putative B chromosomal sequences, or "B-blocks". Our custom python script for the identification of these blocks is available on GitHub (https://github.com/farhan-phd/Integrative-genomic-analysis-reveals-thegene-contents-repeats-landscapes-and-evolutionary-dynamics/tree/master). The B-blocks were constructed using two tolerance levels (200 bp and 1 kb); for example, a tolerance of 200 bp means that B sequences within 200 bp of each other whose mean ratio meets the established threshold are merged into the same block. In this way, four different sets of B-blocks were obtained (N = 0 and N = 2 SD, each with tolerances of 200 bp and 1 kb). Finally, the blocks called with 2 SD and 200 bp tolerance, the most stringent conditions, were used for further analysis. The B-blocks were manually inspected in JBrowse [77], and comparative plots were created with the "Sushi" package [78] of Bioconductor (https://www.bioconductor.org/) in R.
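A condensed sketch of this thresholding logic is given below; the authors' full script is linked above, so this is only an illustration. Window coverage values are assumed to come from bedtools output, the input tuple format is an assumption, and for brevity the background MR/SD are estimated from all windows rather than from matched regions known not to contain B sequences, as the paper does:

```python
import statistics

def call_b_blocks(windows, n_sd=2):
    """Call putative B-blocks from per-window raw coverage of B- and B+
    libraries. `windows` is a list of (chrom, start, end, cov_bminus,
    cov_bplus). Each library is depth-normalized by its own mean, the
    per-window B+/B- ratio (BPR) is taken, and windows with
    BPR >= MR + n_sd * SD are kept."""
    usable = [w for w in windows if w[3] >= 15 and w[4] >= 15]
    mean_minus = statistics.mean(w[3] for w in usable)
    mean_plus = statistics.mean(w[4] for w in usable)
    bprs = [(c, s, e, (cp / mean_plus) / (cm / mean_minus))
            for c, s, e, cm, cp in usable]
    ratios = [b[3] for b in bprs]
    mr, sd = statistics.mean(ratios), statistics.stdev(ratios)
    return [b for b in bprs if b[3] >= mr + n_sd * sd]

# Toy windows: (chrom, start, end, B- coverage, B+ coverage); the last
# window is ~3x enriched in the B+ library and is called as a block.
demo = [("chr1", i * 1000, (i + 1) * 1000, 30, 31) for i in range(6)]
demo.append(("chr1", 6000, 7000, 29, 88))
print(call_b_blocks(demo))
```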
In addition to the coverage-ratio analysis, we also isolated sequences located exclusively on the B chromosome and completely absent from the As. We screened the B-containing genome and analyzed the read alignments such that a region of at least 200 bp carried no B- alignments at all, while B+ alignments covered the same region continuously and without interruption at a minimum of 50× coverage. The complete absence of B- alignments means that the respective region is missing from the 0B genome (A chromosomes), and, given the significant representation of B+ reads aligned to the region, the sequence is potentially specific to the B chromosome. Read alignments were measured for each exclusive B sequence in all B- and B+ genomes with the bedtools coverage pipeline. The fasta sequences of the B blocks and exclusive regions are provided as Supplementary datasets 8-13.
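A minimal sketch of this screen, assuming per-base coverage arrays for each library over the same reference sequence (in practice these would be taken from bedtools genomecov output; function and variable names are illustrative):

```python
def exclusive_b_regions(cov_bminus, cov_bplus, min_len=200, min_cov=50):
    """Yield (start, end) intervals where the B- library has zero coverage
    while the B+ library is continuously covered at >= min_cov; intervals
    shorter than min_len are discarded."""
    start = None
    for pos, (cm, cp) in enumerate(zip(cov_bminus, cov_bplus)):
        if cm == 0 and cp >= min_cov:
            if start is None:
                start = pos
        else:
            if start is not None and pos - start >= min_len:
                yield (start, pos)
            start = None
    if start is not None and len(cov_bminus) - start >= min_len:
        yield (start, len(cov_bminus))

# Toy example: a 250 bp stretch with zero B- coverage and 60x B+ coverage.
bm = [20] * 100 + [0] * 250 + [20] * 100
bp = [22] * 100 + [60] * 250 + [22] * 100
print(list(exclusive_b_regions(bm, bp)))  # [(100, 350)]
```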
Search for protein-coding sequences on Bs
To screen the B chromosomes of our model species for protein-coding sequences, we employed an approach similar to Navarro-Domínguez et al. [37]. The transcriptome assemblies of A. mexicanus (used as the reference for A. mexicanus and A. correntinus) and Locusta migratoria (used as the reference for A. flavolineata) were retrieved from the NCBI database (accession IDs: GDIO00000000.1, PRJNA237016). We mapped the reads obtained from the B- and B+ genomes against these reference transcriptomes using the "local sensitive" preset of Bowtie2. We counted the total number of reads aligned to each transcript and compared the respective abundances of B+ and B- reads using the python script published by Navarro-Domínguez et al. [37] (https://github.com/fjruizruano/ngs-protocols/blob/master/count_reads_bam.py). The putative B-located coding sequences (CDS) were identified based on the log2(B+/B-) ratio, considering a minimum of 40 reads aligned to each contig. Transcripts with a log2 ratio equal to or greater than one were taken to represent B sequences. For instance, a single-copy B-located sequence, present in two copies in a 0B diploid genome, will have an extra third copy in a 1B genome due to the B chromosome, whereas a 2B genome will have two extra copies, i.e. a doubling, giving log2(2) = 1; we therefore chose this value as the threshold for extracting the B-localized CDS.
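A sketch of this ratio filter, assuming read counts per transcript have already been tallied (e.g., with the counting script cited above) and depth-normalized between libraries; the transcript names and counts below are hypothetical:

```python
import math

def b_located_cds(counts, min_reads=40, log2_cutoff=1.0):
    """Filter transcripts for putative B-located CDS. `counts` maps a
    transcript id to (reads_bminus, reads_bplus); both libraries must have
    >= min_reads mapped, and log2(B+/B-) must reach the cutoff. With one
    B-borne copy added to the two A copies of a diploid, a 2B genome has
    4 vs 2 copies, hence the log2(2) = 1 threshold."""
    hits = {}
    for tid, (bm, bp) in counts.items():
        if bm >= min_reads and bp >= min_reads:
            ratio = math.log2(bp / bm)
            if ratio >= log2_cutoff:
                hits[tid] = ratio
    return hits

demo = {"transcript_1": (120, 118),   # no B enrichment
        "transcript_2": (100, 410)}   # ~4x more reads in the 2B library
print(b_located_cds(demo))  # {'transcript_2': ~2.04}
```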
Repeat and gene annotation of B chromosome blocks
The B-blocks and B-located CDS were first annotated for repetitive DNAs using RepeatMasker 4.0.3 [79]. Repeats were masked using the Metazoa reference database. We assayed the relative representation of TE superfamilies using the equation

TE = (% of the TE family in the genome × 100) / total repeat content of the genome,

as described by Mcgaugh et al. [47]. We also performed a comparative repeat composition analysis between the A and B chromosomes. Results were parsed with a Perl script to depict the relative abundance of repeat classes from the RepeatMasker out files. The repeat landscapes were generated with the RepeatMasker "calcDivergenceFromAlign.pl" and "createRepeatLandscape.pl" utility scripts.
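The relative-abundance calculation itself is straightforward; a one-function sketch under the assumption that the family and total repeat percentages have already been parsed from the RepeatMasker summary output (the values below are hypothetical):

```python
def relative_te_abundance(family_pct, total_repeat_pct):
    """Relative abundance of a TE family among all repeats, following the
    equation of Mcgaugh et al.: (% of the family in the genome) * 100 /
    (total repeat content of the genome)."""
    return family_pct * 100.0 / total_repeat_pct

# Hypothetical values parsed from a RepeatMasker .tbl summary:
# DNA transposons at 12% of a genome that is 35% repetitive overall.
print(relative_te_abundance(family_pct=12.0, total_repeat_pct=35.0))  # ~34.3
```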
We also annotated the B-blocks and B-located CDS for genes by comparing them to reference gene sets of close species downloaded from NCBI databases. The reference gene sets consist of A. mexicanus for the Astyanax species and Drosophila melanogaster for A. flavolineata, selected on the basis of complete gene representation and high-quality chromosome-level assemblies. We calculated an integrity score (0-100%) for every B gene found in the B-blocks by combining all DNA pieces related to the same gene (each "piece" being a fragment of the gene recovered in a different block). The total recovered gene length is the sum of all pieces, and the integrity percentage of each B gene was calculated by comparing this length to the corresponding gene length in the annotation of the reference genome. Genes with integrity scores < 50% are highly fragmented or incomplete; the higher the integrity score, the more likely the gene is intact. Finally, the genes were categorized into groups (from 0 to 100%) on the basis of integrity percentage. CDS that aligned to neither genes nor repeats were termed non-annotated or unknown sequences.
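A minimal sketch of the integrity calculation, assuming each B-block annotation has been reduced to (gene_id, piece_length) pairs and that reference gene lengths are known; gene names and lengths are hypothetical:

```python
from collections import defaultdict

def gene_integrity(pieces, reference_lengths):
    """Integrity score (0-100%) per B-located gene: the summed length of all
    gene pieces recovered across B-blocks divided by the gene's full length
    in the reference annotation. Scores are capped at 100 in case duplicated
    pieces overcount the reference length."""
    summed = defaultdict(int)
    for gene_id, piece_len in pieces:
        summed[gene_id] += piece_len
    return {g: min(100.0, 100.0 * total / reference_lengths[g])
            for g, total in summed.items()}

# Hypothetical fragments of two genes recovered from B-blocks.
pieces = [("geneA", 300), ("geneA", 450), ("geneB", 200)]
ref = {"geneA": 1000, "geneB": 2500}
print(gene_integrity(pieces, ref))  # geneA: 75.0, geneB: 8.0 (fragmented)
```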
Quantitative real-time PCR (qPCR) and fluorescence in situ hybridization (FISH)
To validate B sequences identified by the bioinformatic analysis, we performed polymerase chain reactions (PCR) and used the amplified PCR products as FISH probes. Primers for selected sequences were designed with the NCBI/Primer-BLAST (https://www.ncbi.nlm.nih.gov/tools/primer-blast/) and PrimerQuest (https://www.idtdna.com/Primerquest/Home/Index) tools. Primer quality was evaluated with the PCR Primer Stats program (http://www.bioinformatics.org/sms2/pcr_primer_stats.html). Primers designed for FISH probes and qPCR experiments for the TEs and B blocks of A. mexicanus and A. correntinus are listed in Supplementary Table S2. Randomly selected B-block sequences were used to design qPCR primers to confirm the genomic data and relative abundance on the B chromosome of A. mexicanus. Genomic DNA from each of two individuals (six samples in triplicate in total) containing 0B, 1B and 2B chromosomes was diluted to 40 ng/μL and used as template to measure gene dose by the comparative CT method of relative quantification [80]. We selected the pde4ca gene as a reference, since it resides on the A chromosomes and thus has the same copy number in both B- and B+ genomes. The gene dosage ratio (GDR) was calculated by comparing the mean CT values of the target sequences (blocks) and the reference gene, according to Valente et al. [36]. The qPCR experiments were performed on a StepOne Real-Time PCR System (Life Technologies, Carlsbad, CA) with cycling conditions of 95 °C for 10 min, followed by 45 cycles of 95 °C for 15 s and 60 °C for 1 min. The dissociation curve was inspected to confirm specific amplification of the PCR products.
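A sketch of the gene dosage ratio under comparative-CT logic: each sample's target CT is normalized against the single-copy reference gene, and the dose relative to the 0B sample is 2 to the negative delta-delta-CT. The mean CT values below are hypothetical, and the exact normalization in the paper follows Valente et al. [36]:

```python
def gene_dosage_ratio(ct_target_test, ct_ref_test, ct_target_0b, ct_ref_0b):
    """Gene dosage ratio (GDR) of a target block in a B+ sample relative to
    a 0B sample: delta-CT within each sample against the reference gene
    (pde4ca in the paper), then GDR = 2 ** -(ddCT)."""
    d_ct_test = ct_target_test - ct_ref_test
    d_ct_0b = ct_target_0b - ct_ref_0b
    return 2.0 ** -(d_ct_test - d_ct_0b)

# Hypothetical mean CTs: the block amplifies ~1 cycle earlier (relative to
# the reference) in the B+ sample, implying roughly double the dose.
print(gene_dosage_ratio(ct_target_test=24.0, ct_ref_test=25.0,
                        ct_target_0b=25.0, ct_ref_0b=25.0))  # ~2.0
```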
The FISH probes were labeled with Digoxigenin-11-dUTP (Sigma), and stringent conditions were applied to perform FISH according to the protocol of Pinkel et al. [81]. Slides were prepared by dropping 10 μL of chromosome suspension and were subsequently treated with RNase. Conditions were optimized for each probe during the pre-hybridization washing steps, and denaturation of chromosomal DNA was performed in 70% formamide for 15 s at 65 °C. The hybridization mix, containing 10% dextran sulphate, 2× SSC, 50% formamide and the labeled probe, was denatured for 15 s at 65 °C and dropped onto the denatured chromosomes for overnight hybridization at 37 °C. The post-hybridization washing steps were adjusted for each probe (from 3 to 5 min), and probe detection was performed with digoxigenin-rhodamine (Roche), followed by staining of the slides with DAPI (4′,6-diamidino-2-phenylindole, Vector Laboratories). Microscopic examination of the slides was done with an Olympus BX61 optical microscope. Metaphase images were captured with an Olympus DP72 camera and optimized using GIMP (GNU Image Manipulation Program).
Sequence analysis of microdissected B chromosomes in multiple species
In addition to the whole-genome analysis of our candidate sequenced species, we included four additional species to test our hypothesis about B chromosome evolution. For these additional species, NGS Illumina data for microdissected B chromosomes of Eyprepocnemis plorans (grasshopper) [82], Lates calcarifer (Asian seabass) [83], and Apodemus flavicollis and Apodemus peninsulae (mice) [84] were downloaded from the NCBI SRA database. We analyzed these Bs because their genomic composition, including gene content, gene ontologies, and repeat annotation, had not been characterized previously. Reads with a quality score < 20 were removed using the FASTX-Toolkit, and adapter sequences and low-quality bases were trimmed using the cutadapt pipeline [85] and Trimmomatic [67]. Clean reads were mapped to the respective assembled reference genomes using Bowtie2 with default parameters. The reference genomes consisted of the L. calcarifer [83], Locusta migratoria [86], and Mus musculus (GRCm38.p6) [87] assemblies, which were retrieved from the NCBI Genome database. Successfully mapped reads were chained together across gaps < 10 kb to form B chromosome pseudo-scaffolds. Pseudo-scaffolds were assembled using CAP3 [88] to remove redundancy, and the generated contigs were manually checked to reduce potential mis-assemblies. The microdissected B chromosome assemblies were performed on the basis of the pseudo-scaffolding strategy proposed by Vij et al. [83]. The assembled microdissected B chromosomes were then compared to their reference gene annotation sets to identify their respective gene contents. The reference sets of genes were retrieved from the Ensembl browser (https://www.ensembl.org/index.html), and we used BLASTn [89] for homologous gene annotation. The references consisted of Gasterosteus aculeatus, Drosophila melanogaster, and Mus musculus for the B chromosome analyses of L. calcarifer, E. plorans, and Apodemus, respectively, selected on the basis of the completeness of their gene annotations.
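The chaining step can be pictured as simple interval merging: consecutive mapped regions on the same reference scaffold are joined whenever the gap between them is below 10 kb. The sketch below is illustrative only; the interval representation is an assumption, not the authors' actual code.

```python
# Illustrative sketch of the pseudo-scaffolding idea: merge mapped-read
# intervals on one reference scaffold when the gap between consecutive
# intervals is below 10 kb.
def chain_intervals(intervals, max_gap=10_000):
    """intervals: list of (start, end) mapping positions on one scaffold."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start - merged[-1][1] < max_gap:
            merged[-1][1] = max(merged[-1][1], end)  # extend current chain
        else:
            merged.append([start, end])              # open a new chain
    return [tuple(iv) for iv in merged]

print(chain_intervals([(0, 100), (5_000, 5_200), (40_000, 40_500)]))
# [(0, 5200), (40000, 40500)]
```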
Functional annotations and gene ontologies enrichment analysis
We used the ViSEAGO package [90] of R (Bioconductor) to perform the following analysis. First, we retrieved the list of B chromosome genes for each species from the BLASTn output, requiring the best BLAST hit to overlap the respective reference gene over at least 200 base pairs of the query sequence. We then downloaded the complete annotation database of all genes of the reference species from Ensembl. Through ViSEAGO, we conducted a functional genomics analysis of both single and multiple sets of B chromosome genes against the complete set of reference genes as the background gene set. The latest versions of the gene ontology (GO) databases for each species were loaded into R from Ensembl, and functional enrichment analyses were performed using Fisher's exact test [91].
In this analysis, the list of B chromosome genes of each species was compared to a background set comprising the entire set of genes in the reference genome. The background sets were retrieved from Ensembl using biomaRt in Bioconductor. Enriched GO terms were identified based on their P-values, which represent the degree of independence between related terms. Tables of results summarizing the functional enrichment tests were obtained for each species. The enriched GO terms were grouped together on the basis of semantic similarity (SS) according to their topological positions and annotations in the GO graph. The information content (IC) value, which is the negative log probability of occurrence of a GO term, was computed, and clustering of enriched GO terms was performed using the graph-based method of Wang et al. [92]. Both single and comparative heatmaps were plotted with the -log10(p-value) from the enrichment tests and the IC values of the GO clusters to profile the functional overview and support biological interpretation of the B chromosomes. The enriched terms were organized into clusters according to their topological similarity in the dendrogram, so that the GO terms within a cluster share common functions. A comparison of GO terms across the Bs of multiple species was also performed to reveal the common functions shared among the genes residing on Bs, and the results were plotted as an UpSet graph in R.
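At its core, the enrichment test for a single GO term is a Fisher's exact test on a 2×2 table of B-genes versus background genes, with and without the term. A minimal sketch with illustrative counts:

```python
# Minimal sketch of a Fisher's exact enrichment test for one GO term,
# comparing B-chromosome genes against the full reference gene set as
# background. Counts are invented for the example.
from scipy.stats import fisher_exact

def go_enrichment(n_b_with_term, n_b_genes, n_bg_with_term, n_bg_genes):
    table = [
        [n_b_with_term, n_b_genes - n_b_with_term],
        [n_bg_with_term - n_b_with_term,
         (n_bg_genes - n_bg_with_term) - (n_b_genes - n_b_with_term)],
    ]
    odds_ratio, p_value = fisher_exact(table, alternative="greater")
    return odds_ratio, p_value

# e.g. 8 of 40 B-genes vs 200 of 20,000 background genes carry the term
print(go_enrichment(8, 40, 200, 20_000))
```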
Epigenomic profiling of the B chromosome in A. mexicanus using bisulfite sequencing data
The methylation status of the micro-B sequences in A. mexicanus was assessed using the whole-genome bisulfite sequencing data recently generated for this species by Gore et al. [45]. The data were retrieved from NCBI GEO (accession: GSE109006). This sequencing was performed on eye tissue of the cavefish A. mexicanus on an Illumina HiSeq2500. We trimmed and filtered low-quality reads from the raw sequences using Trimmomatic and retained the high-quality reads for alignment. We used the Bismark tool [93] to map the bisulfite reads against our B-blocks as a reference. First, the FASTA sequences of the B-blocks of A. mexicanus were indexed and converted into bisulfite sequences. In the second step, the high-quality bisulfite reads were aligned to the B-blocks to produce a SAM file and a methylation call report. In the third step, methylation information was extracted from the alignment output by running the methylation extractor of the Bismark software.
Comparative genomics and rearrangement detection
To investigate the genomic differences between the B- and B+ genomes, we performed whole-genome alignments of our de novo assemblies using nucmer in the MUMmer package [94]. The B+ and B- genomes of A. mexicanus were aligned with the "--maxmatch", "-c", "-b", and "-l" options of nucmer to balance and resolve alignments. We filtered the alignments with the "delta-filter" script of MUMmer, and the filtered output files were parsed into tab-delimited files with the "show-coords" tool. The B- and B+ assemblies were selected as the reference and query genomes, respectively, for the identification of rearrangements.
Genomic rearrangements were identified using SyRI (Synteny and Rearrangement Identifier) [46] with default parameters. The unplaced scaffolds of both assemblies were merged into pseudoscaffolds, and the chromosome IDs were renamed and formatted before running SyRI. The output files were parsed using custom bash commands, and the rearrangements were plotted as Circos graphics using ClicO FS [95]. The bar graphs and violin plots were generated in R. To reveal the homology of the B chromosome sequences of A. mexicanus with the A chromosomes and to identify their ancestral sequences, we compared the B chromosome of A. mexicanus to the reference genome. To find putative regions of homology with the ancestral sequences of the B-blocks, we identified colinear regions of sequence similarity to infer synteny and generated dotplots of the alignments. For this analysis we chose the largest blocks (size > 2 kb) and performed the comparison using CoGe SynMap [96] to identify the evolutionary genomic patterns of the B chromosome. The different syntenic patterns were interpreted according to the dotplot examples given in CoGepedia (https://genomevolution.org/wiki/index.php/Syntenic_comparison_of_Arabidopsis_thaliana_and_Arabidopsis_lyrata) [97]. We also compared the B+ de novo genome of A. mexicanus with the reference hard-masked genome containing only CDS to reveal the syntenic patterns.
Additional file 2: Table S1. De novo genome assemblies and their statistics. Table S2. List of primers of representative blocks used in qPCR and FISH mapping. Table S3. Summary of analyzed data used for microdissected Bs assemblies. Table S4. List of 10 common functions shared among the Bs of all seven analyzed species.
Additional file 3: Figure S1. A workflow of the steps applied in the present study during the genomic analyses of B chromosomes in different species. Figure S2. Coverage plots of B-blocks of A. correntinus with a remarkable difference in read coverage between the 0B and 1B samples. Figure S3. Coverage plots of B-blocks of A. flavolineata with a remarkable difference in read coverage between the 0B and 2B samples. Figure S4. Identification of B chromosome genomic blocks (A) and their repeat contents (B) in A. flavolineata. Figure S5. Identification of protein-coding genes located on the B chromosome of A. mexicanus, using the number of reads mapped to the CDSs found in the transcriptome, in the 0B (X axis) and 2B (Y axis) samples. Each dot represents a coding sequence, with labels only for those with a log2 ratio greater than 1.5. The plot is limited to 800 mapped reads to optimize the visualization. Figure S6. Identification of protein-coding genes located on the B chromosome of A. correntinus, using the number of reads mapped to the CDSs found in the transcriptome, in the 0B (X axis) and 1B (Y axis) samples. Each dot represents a coding sequence, with labels only for those with a log2 ratio greater than 1. The plot is limited to 800 mapped reads to optimize the visualization. Figure S7.
Coverage plots of (representative) coding sequences detected on the B chromosome of A. mexicanus using the log2 ratio. Each plot compares the read depth of the transcript between 0B and 2B. Figure S8. Coverage plots of (representative) coding sequences detected on the B | 2020-09-24T13:10:56.866Z | 2020-09-23T00:00:00.000 | {
"year": 2020,
"sha1": "01f73867654e5fb43685eecaa49a138585e9f36a",
"oa_license": "CCBY",
"oa_url": "https://bmcgenomics.biomedcentral.com/track/pdf/10.1186/s12864-020-07072-1",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "b4aeef850120a021bd5fce5f1097843c004c6d99",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
237947762 | pes2o/s2orc | v3-fos-license | Biased Survival Predictions When Appraising Health Technologies in Heterogeneous Populations
Introduction Time-to-event data from clinical trials are routinely extrapolated using parametric models to estimate the cost effectiveness of novel therapies, but how this approach performs in the presence of heterogeneous populations remains unknown. Methods We performed a simulation study of seven scenarios with varying exponential distributions modelling treatment and prognostic effects across subgroup and complement populations, with follow-up typical of clinical trials used to appraise the cost effectiveness of therapies by agencies such as the UK National Institute for Health and Care Excellence (NICE). We compared established and emerging methods of estimating population life-years (LYs) using parametric models. We also proved analytically that an exponential model fitted to censored heterogeneous survival times sampled from two distinct exponential distributions will produce a biased estimate of the hazard rate and LYs. Results LYs are underestimated by the methods in the presence of heterogeneity, resulting in either under- or overestimation of the incremental benefit. In scenarios where the overestimation of benefit is likely, which is of interest to the healthcare provider, the method of taking the average LYs from all plausible models has the least bias. LY estimates from complete Kaplan–Meier curves have high variation, suggesting mature data may not be a reliable solution. We explore the effect of increasing trial sample size and accounting for detected treatment–subgroup interactions. Conclusions The bias associated with heterogeneous populations suggests that NICE may need to be more cautious when appraising therapies and to consider model averaging or the separate modelling of subgroups when heterogeneity is suspected or detected. Supplementary Information The online version contains supplementary material available at 10.1007/s40273-021-01082-x.
Introduction
Health technology assessment (HTA) agencies, such as the National Institute for Health and Care Excellence (NICE) in England and Wales, assess the clinical and cost effectiveness of health technologies based on the appraisal of supporting clinical evidence, usually from at least one clinical trial, which is then incorporated alongside a series of assumptions into an economic model.
Clinical trials often demonstrate heterogeneity in treatment efficacy among patients, with some patients receiving less or even no clinical benefit [1,2]. This heterogeneity may increase when converting clinical benefits into quality-adjusted life-years (QALYs), which are used in an attempt to present a level playing field on which the effectiveness of all treatments for all diseases can be judged. QALYs are usually obtained by estimating the expected number of life-years (LYs) and multiplying by a health utility value that captures the expected quality of health a patient is expected to experience whilst they remain alive, which may vary as patients pass through different stages of disease, though other methods are possible.
For severe, terminal diseases such as advanced cancers, the goals of treatments are to delay disease progression and/or extend survival as the prospect of being cured is unlikely. Treatments for such diseases usually report clinical outcomes based on their relative efficacy using a hazard ratio, whereas the relative benefit will be measured using the gain in QALYs for cost-effectiveness assessments. A hazard ratio uses only observed data, whereas LYs often involve extrapolations.
This use of differing scales between clinical and cost-effectiveness assessments means that heterogeneous treatment effects are even harder to identify. A treatment could appear more clinically effective for a subgroup of patients compared with the complement in terms of a hazard ratio yet offer less benefit in the subgroup when examining the LY/QALY benefit because of the influence of prognostic factors. For example, a subgroup and complement may have hazards of 0.5 and 0.25, with average LYs of 2 and 4, respectively. A treatment with a hazard ratio of 0.7 in the subgroup and 0.8 in the complement might suggest the treatment has a stronger effect in the subgroup; however, the LYs are 2.86 and 5 when the subgroup and complement are treated, meaning the complement population gains 1 LY while the subgroup gains only 0.86. The reverse could also be true, with different clinical responses in the subgroup and its complement resulting in equivalent LY benefits. Factors that are clinically prognostic, such as age, could become treatment-effect modifiers when appraising a therapy from a health economic perspective.
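Under an exponential survival model, these figures follow directly from mean survival = 1/hazard. The short sketch below is purely illustrative and reproduces the numbers quoted above:

```python
# Reproduces the worked example above under an exponential survival
# assumption, where mean life-years = 1 / hazard.
def mean_ly(hazard):
    return 1.0 / hazard

for name, hz, hr in [("subgroup", 0.5, 0.7), ("complement", 0.25, 0.8)]:
    gain = mean_ly(hz * hr) - mean_ly(hz)
    print(f"{name}: untreated {mean_ly(hz):.2f} LY, "
          f"treated {mean_ly(hz * hr):.2f} LY, gain {gain:.2f}")
# subgroup: untreated 2.00 LY, treated 2.86 LY, gain 0.86
# complement: untreated 4.00 LY, treated 5.00 LY, gain 1.00
```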
Given the increasing pressure on healthcare budgets, it is vital that the implications of current methods are fully understood to assist decision makers and ensure fair access to health technologies.
Our aims are to demonstrate the relationship between hazard ratios and LY efficacy estimates and to explore the ability of current methodology to accurately estimate LYs when the population includes a subgroup with heterogeneity in overall survival and treatment effect compared with its complement.
Method Overview
We undertook a series of simulations capturing seven distinct scenarios, each replicating follow-up for a time-to-event outcome from a phase III clinical trial at the point of appraisal by an HTA agency. Each scenario contained a different combination of prognosis and treatment effect for a subgroup and complement population, with half the trial population featuring in the subgroup. Five methods of estimating LYs were implemented, all based on a set of candidate parametric models.
Simulation Method
An overview of the simulation is provided in Table 1. The survival times for each subgroup/complement and treatment/control group were sampled from different exponential distributions reflecting plausible hazard ratios of treatment and prognostic effect. The seven scenarios considered (Table 2) were as follows:
• Scenario 0 serves as a reference point and features no difference in prognosis or treatment effect between the subgroup and complement.
• Scenario 1 models no difference in prognosis between the subgroup and complement, with a treatment effect only in the subgroup.
• Scenario 2 features a treatment effect only in the subgroup, but the subgroup has a worse prognosis than the complement.
• Scenario 3 models a subgroup with a worse prognosis, but the treatment has an equal hazard ratio of effect across the subgroup and complement.
• Scenario 4 features a subgroup with a worse prognosis, but the treatment only has an effect in the complement.
• Scenario 5 models a subgroup with a worse prognosis, and the hazard ratio of treatment effect is slightly stronger in the subgroup than in the complement.
• Scenario 6 features a subgroup with a worse prognosis, whereas the treatment has a positive effect in the subgroup and a slight negative effect in the complement.
Our sample size for each scenario was based on assumptions of an overall hazard ratio of 0.75, 90% power, and a 5% alpha, and did not consider treatment-effect interactions. The probability of an event in the follow-up period was 0.60, and the probability of withdrawal was 0.05, giving a sample size of 896 (rounded up to the nearest multiple of 8 to allow for consistently sized subgroups in every simulation for each scenario), computed using Stata's 'power cox' command.
We replicated trial follow-up by generating censoring times using a Gompertz distribution (shape = 3.5, rate = 0.00005). This gave an average censoring time of 3 years, with very few patients censored before 2 years or beyond 4 years of follow-up (see Fig. A1 in the electronic supplementary material [ESM] for an example). Our scenarios had varying power, with the hazard rates used suggesting mortality rates of 41-78% at 3 years. All survival data were generated and survival models fitted using the 'flexsurv' package in R [3], with post-simulation analysis conducted in Stata 16.
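For readers who wish to replicate this censoring mechanism, the following sketch draws Gompertz censoring times by inverse-transform sampling. It assumes the shape/rate parameterization used by flexsurv, i.e. hazard h(t) = rate · exp(shape · t) and S(t) = exp(−(rate/shape)(e^{shape·t} − 1)); the seed and sample size are arbitrary.

```python
# Hedged sketch: inverse-transform sampling of Gompertz censoring times
# under the flexsurv shape/rate parameterization.
import numpy as np

def sample_gompertz(n, shape=3.5, rate=0.00005, rng=None):
    rng = np.random.default_rng(rng)
    u = rng.uniform(size=n)
    # Solve S(t) = u for t: t = log(1 - (shape/rate) * log(u)) / shape
    return np.log(1.0 - (shape / rate) * np.log(u)) / shape

times = sample_gompertz(10_000, rng=1)
print(times.mean(), np.quantile(times, [0.01, 0.99]))
# roughly 3 years on average, with almost all values between 2 and 4
```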
We fixed the proportion of the subgroup at 0.5 of the whole population but anticipated that our results would generalise to subgroups of all proportions. Figure 1 demonstrates the pooling of subgroup and complement survival curves, whilst Fig. 2 shows parametric curves fitted to a heterogeneous population. The true expected LYs for each heterogeneous population were calculated using the LYs of the respective component populations restricted to the first 30 years, weighted by their prevalence. For example, the LYs for one arm are
$$\mathrm{LY} = p\,\frac{1 - e^{-30\lambda_1}}{\lambda_1} + (1 - p)\,\frac{1 - e^{-30\lambda_2}}{\lambda_2},$$
where $\lambda_1$ and $\lambda_2$ are the hazard rates in the subgroup and complement, respectively, and $p$ and $(1 - p)$ are the respective prevalences.
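The following minimal sketch computes this quantity for exponential components: each term is the 30-year restricted mean survival time (RMST) of an exponential distribution, weighted by prevalence.

```python
# Sketch of the true life-year calculation above: the 30-year restricted
# mean survival of each exponential component, weighted by prevalence.
import numpy as np

def true_restricted_ly(lam1, lam2, p=0.5, horizon=30.0):
    rmst = lambda lam: (1.0 - np.exp(-lam * horizon)) / lam
    return p * rmst(lam1) + (1.0 - p) * rmst(lam2)

print(true_restricted_ly(0.5, 0.25))  # subgroup/complement example rates
```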
Analytical Method
We fitted eight parametric models (exponential, Weibull, log-normal, log-logistic, gamma, generalised gamma, Gompertz, and generalised F) independently to each arm for each set of simulated trial follow-up data, ignoring subgroup effects, and estimated LYs from the extrapolation of these models. We removed implausible models by assessing each model's prediction of survival at 5 and 10 years. Estimates were considered implausible if they fell outside a ±7.5 percentage point window around the true 5-year survival percentage or a ±5 percentage point window around the 10-year value, when survival rates are expected to be much lower. These windows were consistent with the variation in predictions made by clinical experts in NICE technology appraisals, in the authors' experience. We considered three distinct approaches to obtaining a LY estimate. First, we chose the single best-fitting model for each arm independently according to the Akaike information criterion (AIC) and the Bayesian information criterion (BIC), despite the limitations of this approach [4]. This means different parametric models could be chosen for each arm, which NICE technical support document (TSD) 14 encourages only when justified by "clinical expert judgement, biological plausibility, and robust statistical analysis" [5]. Second, in keeping with NICE TSD 14, we selected the model with the lowest combined AIC/BIC for both arms, obtained by adding the AIC/BIC across both arms; we believe this approach to be most consistent with current practice [6,7]. Finally, we calculated the mean of the LY estimates from all plausible models, as presented by Gallacher et al. [8], which generally outperformed information criteria-based weights. For reference, we measured the area under the Kaplan-Meier curve for each simulation, estimating the LYs as they would have been had follow-up been complete without any censoring.
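Schematically, the three selection rules can be expressed as below. The fit results are placeholder values, implausible models are assumed already removed, and for the arm-combined TSD 14 rule one would sum the AIC/BIC across both arms before taking the minimum; this sketch shows only the per-arm logic.

```python
# Schematic sketch of the three LY selection rules, given per-arm fits.
# Each entry maps model name -> (AIC, BIC, LY estimate); values are fake.
def select_ly(fits, rule):
    if rule == "AIC":
        return min(fits.values(), key=lambda f: f[0])[2]
    if rule == "BIC":
        return min(fits.values(), key=lambda f: f[1])[2]
    if rule == "average":          # mean LY across all plausible models
        return sum(f[2] for f in fits.values()) / len(fits)
    raise ValueError(rule)

fits_arm = {"exponential": (1201.3, 1205.9, 4.1),
            "weibull": (1199.8, 1209.0, 4.6),
            "gompertz": (1200.5, 1209.7, 5.0)}
for rule in ("AIC", "BIC", "average"):
    print(rule, select_ly(fits_arm, rule))
```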
We fitted two Cox models in each simulation [9], the first only estimating a treatment effect, and the second estimating treatment and subgroup effects and a treatment-by-subgroup interaction effect.
Finally, we explored the impact of doubling the trial sample size and of fitting separate parametric models to the subgroup and complement populations of the treatment and control arms whenever a significant interaction term was detected by a Cox model at the 0.05 significance level threshold.
Main Scenarios
Scenario 0 served as a reference point, demonstrating the performance of the different approaches when there is no heterogeneity within either arm. There is little to distinguish between the methods of single model selection, with each showing almost no bias (Table 3). Even in the absence of heterogeneity, few estimates of LYs from the fitted models were within 10% of the true LYs, with the highest being 27%. LY estimates from complete Kaplan-Meier follow-up were within this range for 39% of simulations.
Scenario 1 applied the hazard ratio only in the subgroup, with no prognostic differences. The methods tended to underestimate incremental LYs (Table 3) because the benefit of the intervention was underestimated (Fig. 3). A significant treatment effect and a significant treatment-subgroup interaction were each detected in just under one-half of simulations (46 and 43%, respectively; Table 3).
Scenario 2 applied the hazard ratio only to the subgroup, which also had a worse prognosis than the complement. The methods underestimated LYs in both arms but overestimated incremental LYs. BIC-based selection was associated with the highest bias. Significant hazard ratios and interaction terms were detected in just over one-half of simulations (56.5 and 54.7%).
Scenario 3 applied the hazard ratio to the whole population, but the subgroup had a worse prognosis. The LYs for the intervention were generally underestimated, leading to underestimation of the incremental benefit. A significant treatment effect was detected in almost all simulations (97.8%), but a significant interaction term was rare (5.2%).
Scenario 4 featured a hazard ratio in the complement, whereas the subgroup population had a worse prognosis. This scenario was analogous to scenario 2, and the results were consistent with the switch in treatment efficacy. Incremental efficacy was underestimated, with the BIC-based methods being the most severe. A significant treatment effect was detected in only 43.7% of simulations (i.e., not in the majority), whereas a significant interaction was detected in 52.9%.
Scenario 5 applied a hazard ratio to both the subgroup and the complement, but this was stronger in the subgroup, which had a worse prognosis. LYs were underestimated for both arms by all methods, but these largely cancelled out to provide unbiased estimates of incremental benefit. A significant treatment effect was detected in most simulations (91.4%), but a significant interaction effect was not (11.8%).
Scenario 6 applied a hazard ratio of positive treatment effect in the subgroup, which also had a worse prognosis, along with a negative treatment effect in the complement. Methods tended to underestimate LYs for both arms, but this was more considerable in the control arm, leading to overestimation of the incremental benefit. A significant treatment effect in the whole population was detected in only 35.6% of simulations for this scenario, whereas a significant treatment-subgroup interaction was detected in 75.8%.
The optimal method varied by scenario, and there was little to distinguish between model averaging and the AIC-based methods in terms of bias and accuracy. Examination of the distributions of the results (Fig. 3) suggested that estimates coming from model averaging were less skewed and so may be more reliable.
LY estimates for all methods were most accurate when there was little or no heterogeneity within either arm (scenarios 0, 3, and 5), with a noticeably higher percentage of estimates falling within 10% of the true incremental LYs; however, they were all outperformed by complete Kaplan-Meier follow-up. The LY estimates from complete follow-up still had high variability, with the percentage of LY estimates that fell within ± 10% of the true LYs varying across scenarios from 9 to 38%.
Estimates of all approaches were most accurate (least biased and highest percentage within 10% of the true value) in scenarios where little or no heterogeneity was present (scenarios 0, 3, and 5).
Data
The ESM contains the results of LY estimates from each of the parametric models (Tables A2-A3, Fig. A3). When fitted to the censored follow-up of combined populations of two heterogeneous exponential groups, the exponential, Weibull, and gamma models on average underestimated survival, whereas the generalised F, log-normal, and log-logistic models overestimated it, though scenario 0 suggests this may be due to poor fit rather than heterogeneity. The generalised gamma and Gompertz models showed a lot of variation but were generally unbiased.
Additional Analyses
Table 4 and Fig. 4 contain the results of exploratory analyses examining (1) the effects of increasing the sample size to 896 per arm, (2) fitting separate parametric models to the subgroup and complement populations when a significant treatment-subgroup interaction was detected, and (3) both (1) and (2) simultaneously. Scenario 2 was chosen as it modelled a simple interaction that was already often detected in the original scenario. However, the results should generalise to all scenarios of heterogeneous effects.
Increasing the sample size increased the detection of both significant treatment effects and treatment subgroup interactions. It also slightly reduced the bias for the methods of model selection.
Fitting separate models when significant interactions were detected reduced the bias from all methods. When combined with the larger sample size, all methods produced unbiased LY estimates. This approach relies on correct identification of subgroup interactions, may increase variance where interactions are falsely identified, and cannot be applied when subgroups are not identified.
Discussion
Through simulation, we demonstrated the performance of current methodology used in HTA in estimating treatment benefits. We assessed the bias of this methodology when heterogeneity was present in censored follow-up. Across every scenario, we showed that the methods had problems accurately predicting LYs, underestimating them where heterogeneity was present. When estimates of LYs in two treatment groups are used to estimate incremental LYs, this can result in either under- or overestimation of the true benefit, varying by scenario. This issue of biased estimation is therefore a concern to both healthcare providers/decision makers and pharmaceutical manufacturers.
These simulations are supported by an analytical result: when fitting a single exponential model to immature follow-up of a heterogeneous population made up of two components, the survival times of which come from two distinct exponential distributions, the fitted model will always overestimate the true hazard rate, thus underestimating the mean survival time.
Define the true average hazard rate as
$$\bar{\lambda} = \frac{n}{\frac{np}{y} + \frac{(1-p)n}{z}},$$
where $y$ is the hazard rate in the subgroup, of size $np$, and $z$ is the hazard rate in the complement, of size $(1 - p)n$; $\bar{\lambda}$ is the constant hazard whose exponential distribution has the same mean survival as the heterogeneous population. Assuming all patients begin follow-up at the same time, so that those who remain event free are all censored at the same point, the expected estimated hazard rate at censoring time $t$ is
$$\hat{\lambda}(t) = \frac{np\,(1 - e^{-yt}) + (1-p)n\,(1 - e^{-zt})}{\frac{np}{y}(1 - e^{-yt}) + \frac{(1-p)n}{z}(1 - e^{-zt})},$$
the expected number of events divided by the expected total follow-up time. It can be shown that $\hat{\lambda}(t) > \bar{\lambda}$ for all $t$, so an estimate of the LYs based on $\hat{\lambda}(t)$ will be an underestimate. This result can be generalised to show that the hazard is overestimated for any distribution of censoring times, relaxing the assumption on recruitment and censoring times. A detailed proof is presented in the ESM.
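The following numerical check illustrates the reconstructed inequality above: the expected exponential MLE (expected events divided by expected person-time at censoring time t) stays above the average hazard and approaches it only as t grows. The example hazards match those used earlier in the Introduction.

```python
# Numerical check: the expected exponential MLE at censoring time t exceeds
# bar_lambda, the rate whose exponential mean equals the true mean survival.
import numpy as np

p, y, z = 0.5, 0.5, 0.25              # subgroup prevalence and hazards
bar_lambda = 1.0 / (p / y + (1 - p) / z)

for t in (1.0, 3.0, 10.0, 100.0):
    events = p * (1 - np.exp(-y * t)) + (1 - p) * (1 - np.exp(-z * t))
    person_time = (p * (1 - np.exp(-y * t)) / y
                   + (1 - p) * (1 - np.exp(-z * t)) / z)
    print(t, events / person_time, ">", bar_lambda)
# the ratio stays above bar_lambda = 1/3 and approaches it as t grows
```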
It is common, when almost all patients have died, to estimate LYs from the Kaplan-Meier curves instead of parametric models [6,7]. This approach avoids debate over the choice of preferred extrapolation. We showed that, across all six scenarios, complete follow-up without any censoring yielded an estimate of mean survival that deviated at least 10% from the true value in the majority of simulations (up to 91%). This raises the question of whether mature follow-up from clinical trials is sufficiently reliable for decision makers, especially when sample sizes are small. We recommend that the uncertainty in the Kaplan-Meier estimates be considered, perhaps through the 95% confidence interval curves.
As access to therapies is determined not just by clinical efficacy but also by cost effectiveness, the economic assessment should be given greater consideration in trial design and data collection. Cost-effectiveness analysis protocols should be established during the trial development stage to promote transparency. This may be a challenge to pharmaceutical manufacturers, as methods of assessing cost effectiveness vary by country. Consideration should be given not only to powering trials to detect clinically meaningful differences at key follow-up milestones but also to accurately capturing patient survival for a single arm [10], leading to increased confidence in the output from extended follow-up. Our simulations demonstrated the challenge of estimating cost effectiveness from a study powered for a clinical outcome.
In cases where it may be appropriate to make treatment available for only a subgroup of patients, it is critical that the correct group is identified. When such discrimination is not appropriate, it remains imperative that these groups are identified to accurately estimate the treatment benefit in a heterogeneous population. Heterogeneity could also be more prevalent in routine care than in clinical trials, for example where populations tend to be underrepresented in research [11], leading to differences between actual and predicted benefits.
Scenarios 3 and 5 featured varying treatment benefits on the LY scale between the subgroup and complement that were not reflected on the hazard ratio scale. Scenario 5 was more complex in that the hazard ratio suggested a stronger benefit in the subgroup, but the better prognosis of the complement meant that the complement gained more LYs. This scenario could cause confusion if attempting to prioritise patient access.
The often worse performance of the BIC-based methods was perhaps due to their preference for models with the fewest parameters, which may have been the worst at capturing the heterogeneity.
In four scenarios, the analytical methods underestimated the incremental LYs. This means the healthcare provider obtains better value for money than was anticipated at the point of appraisal and that the pharmaceutical manufacturer does not maximise their potential reimbursement. It is likely that pharmaceutical manufacturers already watch for these potential conditions and take steps to minimise their occurrence. It is not necessarily the priority of the healthcare provider to reduce the bias in these scenarios. However, in scenarios 2 and 6, the incremental LYs were overestimated, quite considerably by some methods, which should be of concern to the healthcare provider. Consequently, the avoidance of these scenarios is less of a priority to the pharmaceutical manufacturer, and so they are potentially more likely to occur. These scenarios both featured a treatment effect only in a prognostically worse subgroup. In both, the bias was reduced when LY estimates were obtained either by arm-independent AIC selection or by taking the average of all plausible models, compared with the other methods. Given the skewed nature of the independent AIC selection in these scenarios, taking the average of all plausible models appears to be more reliable, also featuring a higher percentage of LY estimates within 10% of the true value. Hence, we recommend that decision makers such as NICE encourage the presentation of analyses using the average of all plausible models where the treatment effect may interact with a prognostic factor, or the separate modelling of subgroup populations if a significant interaction is detected.
Such an approach is not without risk since phase III trials are not usually designed with the power to detect efficacy among known prognostic or potential treatment-modifying subgroups. Any observed difference in treatment effects could occur by chance and may lead to unnecessary restrictions being applied, resulting in unfair pricing and unfair access to interventions.
To make strides towards personalised medicine, NICE could consider offering greater incentivisation for treatments where the developer has identified novel patient subgroups, which will likely incur additional costs compared with developing a non-stratified therapy. This would ensure patients receive the best therapy for them and avoid treatment prices being based on potentially biased estimates [12].
If heterogeneity is suspected, but not detected or attributable to any known covariate, fitting separate models for different subgroups is not an option. It is possible that flexible parametric approaches [13] or mixture cure models [14] might better capture the heterogeneity than would traditional parametric approaches; however, these were beyond the scope of this study, and further investigation is needed. Data appearing to follow a complex hazard rate may be a consequence of a heterogeneous population containing subgroups that each have a much simpler underlying hazard rate.
A major strength of our study was that it captured a range of interesting scenarios of varying subgroup and complement treatment efficacies representative of the clinical trial follow-up used for appraising the cost effectiveness of therapies by agencies such as NICE. These scenarios could potentially feature in any technology appraisal. However, our study did have limitations. It assumed that the clinical predictions of efficacy were unbiased, which may not be the case in practice. The size of the bias is certainly affected by sample size, subgroup prevalence, and length of follow-up, which were not explored in detail in this study.
Our source distributions were all exponential, which often led to the exclusion of the log-normal and log-logistic curves when plausibility was assessed. We anticipate our results to be generalisable to scenarios beyond those based on the exponential distribution, wherever heterogeneous populations exist, regardless of the underlying distribution. Our results are relevant to not only technology appraisals but also published cost-effectiveness studies, where methods of extrapolating survival are similar [15,16]. Spline models were not included in this simulation because of their additional manual specification when selecting knot frequency and location but have been shown to fit well to trial and registry data where sample sizes are larger than in typical clinical trials [17,18]. Similarly, cure models incorporating external data have been shown to perform well but could not be applied in this simulation [19].
Our study demonstrated that the relationship between clinical benefit and LY benefit is not always linear, which can raise challenges when valuing treatments. Our results were consistent with and may explain the observations of Ouwens et al. [20], who reported that parametric models fitted to trial follow-up underestimated mean survival compared with more mature follow-up. More work exploring further ways of discovering and accounting for this heterogeneity is necessary to increase the likelihood of conducting a balanced assessment of cost effectiveness.
Conclusion
Our study presented simulated trial follow-up for seven scenarios with varying combinations of treatment effects and prognoses across different parts of the population, using information typical of that used to assess the cost effectiveness of therapies.
We demonstrated how existing methods cope poorly with censored data containing heterogeneous treatment effects, which can lead to either under- or overestimation of the incremental LYs. Taking the average LY estimate from all plausible models performs well in scenarios where the incremental LYs are likely to be overestimated, and we encourage decision makers to consider this approach in future appraisals.
The high variability of estimates present in observed follow-up suggests that mature follow-up may not be reliable for estimating mean survival, particularly when sample sizes are small. We demonstrated the improved LY estimation obtained by increasing the sample size and modelling subgroup data separately when significant interactions were detected.
Declarations
Funding The open access fee for this article was paid by Warwick Evidence, which is funded by NIHR award 14/25/05.
Ethics approval Not applicable.
Consent for publication Not applicable.
Availability of data and material The code for this paper can be accessed online (https://github.com/daniel-g-92/heterogeneity).
Author contributions DG generated the research idea, designed and conducted the simulation study, and drafted the manuscript. PK and NS helped develop the study design and contributed to the final manuscript.
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, which permits any non-commercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc/4.0/. | 2021-09-28T14:20:40.200Z | 2021-09-28T00:00:00.000 | {
"year": 2021,
"sha1": "275c9691a26aba829c75d32904d9c41a5a77a84f",
"oa_license": "CCBYNC",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s40273-021-01082-x.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "275c9691a26aba829c75d32904d9c41a5a77a84f",
"s2fieldsofstudy": [
"Medicine",
"Economics"
],
"extfieldsofstudy": [
"Medicine"
]
} |
215827600 | pes2o/s2orc | v3-fos-license | Predicting Online Item-choice Behavior: A Shape-restricted Regression Perspective
This paper is concerned with examining the relationship between users' page view (PV) history and their item-choice behavior on an e-commerce website. We focus particularly on the PV sequence, which represents a time series of the number of PVs for each user–item pair. We propose a shape-restricted optimization model to accurately estimate item-choice probabilities for all possible PV sequences. In this model, we impose monotonicity constraints on item-choice probabilities by exploiting partial orders specialized for the PV sequences based on the recency and frequency of each user's previous PVs. To improve the computational efficiency of our optimization model, we devise efficient algorithms for eliminating all redundant constraints according to the transitivity of the partial orders. Experimental results using real-world clickstream data demonstrate that higher prediction performance is achieved with our method than with the state-of-the-art optimization model and common machine learning methods.
I. INTRODUCTION
Nowadays, a growing number of companies are operating e-commerce websites that allow users to browse and purchase a variety of items via the Internet [45]. In this situation, there is great potential value in analyzing users' item-choice behavior from clickstream data, which is a record of users' page view (PV) history on an e-commerce website. If we grasp a user's purchase intention behind the PV history, we can lead the user to a target page or design a special sales promotion. This gives companies an opportunity to build profitable relationships with website users [22], [33]. Companies can also use the clickstream data to enhance the quality of operational forecasting and inventory management [18]. Meanwhile, users often find it difficult to select an appropriate item from the plethora of choices presented by an e-commerce website [1]. Analyzing users' item-choice behavior can improve the performance of recommender systems that assist users to discover new and worthwhile items [20]. For all of these reasons, a number of prior studies have investigated
clickstream data from various perspectives [7]. In particular, we focus on closely examining the relationship between users' PV history and their item-choice behavior on an e-commerce website.
It has been demonstrated that the recency and frequency of a user's past purchases are critical indicators for purchase prediction [13], [46] and sequential pattern mining [9]. In light of this observation, Iwanaga et al. [19] developed a shape-restricted optimization model specialized for estimating item-choice probabilities from the recency and frequency of each user's previous PVs. This method creates a two-dimensional probability table consisting of item-choice probabilities for all recency and frequency combinations of each user's previous PVs. Nishimura et al. [32] employed latent-class modeling to integrate item heterogeneity into the two-dimensional probability table. Their experimental results demonstrated that higher prediction performance was achieved with the two-dimensional probability table than with common machine learning methods, namely, logistic regression, kernel-based support vector machines, artificial neural networks, and random forests. It is notable, however, that each user's PV history is reduced to two dimensions (i.e., recency and frequency) by the two-dimensional probability table. Such a dimensionality reduction may markedly decrease the amount of information contained in the PV history about users' item-choice behavior.
This paper is focused on the PV sequence, which represents a time series of the number of PVs generated by a user-item pair in each period. In contrast to the two-dimensional probability table, the PV sequence allows us to retain detailed information contained in the PV history. However, since there are a huge number of possible PV sequences, it is extremely difficult to accurately estimate item-choice probabilities for all of them. To overcome this difficulty, we propose a shape-restricted optimization model in which a monotonicity constraint is imposed on item-choice probabilities based on a partially ordered set (poset) specialized for PV sequences. Although this optimization model contains a huge number of constraints, all redundant constraints can be eliminated according to the transitivity of the partial order. To accomplish this, we compute a transitive reduction [2] of a directed graph representing the poset. We demonstrate the effectiveness of our method through experiments using real-world clickstream data.
The main contributions of this paper are highlighted as follows.
• We propose a shape-restricted optimization model for estimating item-choice probabilities from each user's previous PV sequence. This PV sequence model exploits the monotonicity constraint to provide precise estimates of item-choice probabilities.
• We derive two types of posets of PV sequences according to the recency and frequency of each user's previous PVs. Experimental results show that the monotonicity constraint based on these posets greatly enhances the prediction performance of our PV sequence model.
• We devise constructive algorithms for transitive reduction specialized for our posets. The time complexity of our algorithms is much smaller than that of general-purpose algorithms. Experimental results reveal that the transitive reduction improves the efficiency of our PV sequence model in terms of both computation time and memory usage.
• We verify from experimental results that higher prediction performance is achieved with our method than with the two-dimensional probability table and common machine learning methods, namely, logistic regression, artificial neural networks, and random forests.
The remainder of this paper is organized as follows. Section 2 gives a brief review of related work. Section 3 explains the two-dimensional probability table [19], and Section 4 presents our PV sequence model. Section 5 describes our constructive algorithms for transitive reduction. Section 6 evaluates the effectiveness of our method based on experimental results. Section 7 concludes with a brief summary of our work and a discussion of future research directions.
II. RELATED WORK
This section gives a brief survey of predicting online user behavior and discusses some related work on shape-restricted regression.
A. Prediction of online user behavior
There are a number of prior studies that aim at predicting users' purchase behavior on e-commerce websites [10]. A mainstream line of research involves predicting the occurrence of a purchase in each session by means of stochastic/statistical models [5], [23], [30], [31], [36], [41], [46], but this approach gives no consideration to which item users will choose.
Various machine learning methods have been employed for the prediction of online item-choice behavior; these include logistic regression [12], [53], association rule mining [37], support vector machines [38], [53], ensemble learning methods [25], [26], [39], [52], [54], and artificial neural networks [21], [47], [50]. Some tailored statistical models have also been proposed; for instance, Moe [29] devised a two-stage multinomial logit model that separates the decision-making process into an item-view decision and a purchase decision. Yao et al. [51] proposed a joint framework consisting of user-level factor estimation and item-level factor aggregation based on the buyer decision process. Borges and Levene [6] employed Markov chain models to estimate the probability of a user's next link choice.
These prior studies have made effective use of clickstream data in various prediction methods. Additionally, paying attention to time-evolving user behavior is crucial for precise prediction of online item-choice behavior. In light of these insights, we focus on sequences of user PVs to estimate users' item-choice probabilities on e-commerce websites. Moreover, we evaluate the prediction performance of our method by comparison with machine learning methods that are commonly employed in prior studies.
B. Shape-restricted regression
In many practical situations, we have prior information about the relationship between explanatory and response variables. For instance, utility functions are assumed to be increasing and concave according to economic theory [28], and option pricing functions are restricted to be monotone and convex according to finance theory [3]. Shape-restricted regression fits a nonparametric function to a set of given observations under such shape restrictions (e.g., monotonicity, convexity/concavity, and unimodality) [8], [15], [16], [48].
Isotonic regression is the most commonly used method of shape-restricted regression. In general, isotonic regression is the problem of estimating a real-valued monotone (i.e., non-decreasing or non-increasing) function with respect to a given partial order of observations [35]. Several regularization techniques [14], [44] and estimation algorithms [17], [35], [43] have been proposed for isotonic regression.
One of the greatest advantages of shape-restricted regression is that the prediction performance of regression models can be improved because overfitting is mitigated by the shape restrictions [4]. To exploit this advantage, Iwanaga et al. [19] devised a shape-restricted optimization model for estimating item-choice probabilities on e-commerce websites. In line with Iwanaga et al. [19], we propose a shape-restricted optimization model based on order relations of PV sequences to improve prediction performance.
III. TWO-DIMENSIONAL PROBABILITY TABLE
This section gives a brief review of the two-dimensional probability table proposed by Iwanaga et al. [19]. Table I gives an example of the PV history of six user-item pairs. For instance, user u1 viewed the webpage of item i2 once each on April 1st and 3rd. We focus on user choices (e.g., revisit and purchase) on April 4th, which is called the base date. For instance, user u1 chose not item i2 but item i4 on the base date. For each user-item pair, recency and frequency are characterized by the day of the last PV and the total number of PVs, respectively. As shown in Table I, the PV history can be summarized by the recency and frequency combination (r, f) ∈ R × F, where R and F are the index sets representing recency and frequency, respectively.
A. Empirical probability table
Let us denote by $n_{rf}$ the number of user-item pairs that have $(r, f) \in R \times F$. We also set $q_{rf}$ to the number of choices that occurred on the base date among user-item pairs having $(r, f) \in R \times F$. In the case of Table I, the empirical probability table is calculated as
$$\hat{x}_{rf} = \frac{q_{rf}}{n_{rf}} \quad ((r, f) \in R \times F \text{ with } n_{rf} \ge 1), \tag{1}$$
where, for reasons of expediency, $\hat{x}_{rf} := 0$ for $(r, f) \in R \times F$ with $n_{rf} = 0$.
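As a concrete illustration, the sketch below builds such a table from hypothetical records of the form (recency, frequency, chose on base date); the record format and 1-based indexing are assumptions made for the example.

```python
# Sketch of building the empirical probability table from per-pair records.
import numpy as np

def empirical_table(records, R, F):
    n = np.zeros((R, F))
    q = np.zeros((R, F))
    for r, f, chose in records:   # (recency, frequency, chose), 1-based
        n[r - 1, f - 1] += 1
        q[r - 1, f - 1] += int(chose)
    x_hat = np.where(n > 0, q / np.maximum(n, 1), 0.0)  # x_hat := 0 if n = 0
    return x_hat, n

records = [(3, 3, False), (3, 3, False), (3, 3, True), (1, 2, True)]
x_hat, n = empirical_table(records, R=5, F=6)
print(x_hat[2, 2], x_hat[0, 1])  # 0.333... and 1.0
```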
B. Two-dimensional monotonicity model
It is reasonable to assume that the recency and frequency of user-item pairs are positively associated with users' item-choice probabilities. To estimate users' item-choice probabilities $x_{rf}$ for all recency and frequency combinations $(r, f) \in R \times F$, the two-dimensional monotonicity model [19] minimizes the weighted sum of squared errors under monotonicity constraints with respect to recency and frequency:
$$\text{minimize} \quad \sum_{(r,f) \in R \times F} n_{rf}\,(x_{rf} - \hat{x}_{rf})^2 \tag{2}$$
$$\text{subject to} \quad x_{r,f} \le x_{r+1,f} \quad ((r, f) \in R \times F \text{ with } r + 1 \in R), \tag{3}$$
$$x_{r,f} \le x_{r,f+1} \quad ((r, f) \in R \times F \text{ with } f + 1 \in F), \tag{4}$$
$$0 \le x_{rf} \le 1 \quad ((r, f) \in R \times F), \tag{5}$$
where larger indices of $r$ and $f$ correspond to more recent and more frequent PVs, respectively. It is notable, however, that different PV histories are often indistinguishable on the basis of recency and frequency alone. A typical example is the set of user-item pairs $(u_2, i_3)$, $(u_2, i_4)$, and $(u_3, i_2)$ in Table I; although their PV histories are quite different, they share the same recency and frequency combination $(r, f) = (3, 3)$. To distinguish between these PV histories, we exploit the PV sequence in the next section.
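To make the structure of the model concrete, here is a hedged sketch of (2)-(5) as a convex quadratic program using cvxpy (which can call the OSQP solver used later in the paper). The assumption that larger row/column indices mean more recent/more frequent PVs is ours, matching the reconstruction above.

```python
# Hedged sketch of the two-dimensional monotonicity model as a convex QP.
import cvxpy as cp
import numpy as np

def fit_2d_monotone(x_hat, n):
    R, F = x_hat.shape
    x = cp.Variable((R, F))
    objective = cp.Minimize(cp.sum(cp.multiply(n, cp.square(x - x_hat))))
    constraints = [x >= 0, x <= 1,
                   x[:-1, :] <= x[1:, :],   # non-decreasing in recency index
                   x[:, :-1] <= x[:, 1:]]   # non-decreasing in frequency index
    cp.Problem(objective, constraints).solve(solver=cp.OSQP)
    return x.value

x_hat = np.array([[0.05, 0.10], [0.08, 0.20]])
n = np.array([[40.0, 25.0], [30.0, 10.0]])
print(fit_2d_monotone(x_hat, n))
```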
IV. PV SEQUENCE MODEL
This section presents our shape-restricted optimization model for estimating item-choice probabilities from each user's previous PV sequence.
A. PV sequence
The PV sequence for each user-item pair represents a time series of the number of PVs:
$$v = (v_1, v_2, \ldots, v_n),$$
where $v_j$ is the number of PVs $j$ periods ago; see also Table I. Note that the sequence terms are arranged in reverse chronological order; that is, $v_j$ moves back into the past as the index $j$ increases.
Throughout the paper, we express the set of consecutive integers as $\{0, 1, \ldots, m\}$ and the set of all possible PV sequences as
$$\Gamma := \{0, 1, \ldots, m\}^n,$$
where $m$ is the maximum number of PVs in each period, and $n$ is the number of considered periods.
Our objective is to estimate item-choice probabilities $x_v$ for all PV sequences $v \in \Gamma$. However, it is extremely difficult to accurately estimate such probabilities because there are a huge number of PV sequences. In the case of $(n, m) = (|R|, |F|) = (5, 6)$, for instance, the number of different PV sequences is $(m+1)^n$ = 16,807, whereas the number of recency and frequency combinations is only $|R| \cdot |F|$ = 30. To avoid this difficulty, we shall make effective use of monotonicity constraints on item-choice probabilities, as in the optimization model (2)-(5). In the next section, we introduce three operations underlying the development of the monotonicity constraints.
B. Operations based on recency and frequency
It is reasonable from the perspective of frequency that the item-choice probability increases as the number of PVs in a particular period gets larger. To formulate this reasoning, we define the following operation.
Definition 1 (Up). On the domain $\{v \in \Gamma : v_j \le m - 1\}$, define
$$\mathrm{Up}(v, j) := (v_1, \ldots, v_{j-1},\, v_j + 1,\, v_{j+1}, \ldots, v_n).$$
For instance, we have Up((0, 1, 1), 1) = (1, 1, 1) and Up((1, 1, 1), 2) = (1, 2, 1). Since the frequency of PVs is increased by this operation, the monotonicity constraint $x_v \le x_{\mathrm{Up}(v, j)}$ is justified. It is inferred from the perspective of recency that more recent PVs have larger effects of increasing the item-choice probability. To formulate this inference, we consider the following operation, which moves one PV from an old period to a new period.
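The following sketch implements Up exactly as in Definition 1 (matching the two examples above), together with a hedged version of Move. The precise argument convention for Move is our assumption, since the formal definition is not reproduced here; we take it to move one PV from an older period k to a newer period j < k.

```python
# Up as in Definition 1, plus a hedged version of Move; j, k are 1-based.
def up(v, j, m):
    assert v[j - 1] <= m - 1
    return v[:j - 1] + (v[j - 1] + 1,) + v[j:]

def move(v, j, k, m):
    assert j < k and v[k - 1] >= 1 and v[j - 1] <= m - 1
    w = list(v)
    w[j - 1] += 1                  # one more PV in the newer period
    w[k - 1] -= 1                  # one fewer PV in the older period
    return tuple(w)

print(up((0, 1, 1), 1, m=2))       # (1, 1, 1)
print(up((1, 1, 1), 2, m=2))       # (1, 2, 1)
print(move((0, 1, 1), 1, 3, m=2))  # (1, 1, 0)
```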
C. Partially ordered sets
Let $U \subseteq \Gamma$ be a subset of PV sequences. The image of each operation is then defined as the set of all PV sequences obtained by applying that operation once to some member of $U$; for example, $\mathrm{Up}(U) := \{\mathrm{Up}(v, j) : v \in U,\ j \in \{1, \ldots, n\},\ v_j \le m - 1\}$. The following definition states that the binary relation $u \prec_{\mathrm{UM}} v$ holds when $u$ can be transformed into $v$ by repeated application of Up and Move.
Then, the binary relation u ≺ US v holds when u can be transformed into v by repeated application of Up and Swap.
To prove properties of these binary relations, we can use the lexicographic order, which is a well-known linear order [40].
Each application of Up, Move, or Swap makes a PV sequence greater in the lexicographic order. Therefore, we obtain the following lemma.
Lemma 1. If $u \preceq_{\mathrm{UM}} v$ or $u \preceq_{\mathrm{US}} v$, then $u \preceq_{\mathrm{lex}} v$.
The following theorem states that a partial order of PV sequences is derived by the operations Up and Move.
Theorem 1. The binary relation $\preceq_{\mathrm{UM}}$ is a partial order of PV sequences; that is, $(\Gamma, \preceq_{\mathrm{UM}})$ is a poset.
Proof. It is clear from Definition 4 that the relation $\preceq_{\mathrm{UM}}$ is reflexive and transitive. Suppose that $u \preceq_{\mathrm{UM}} v$ and $v \preceq_{\mathrm{UM}} u$. It follows from Lemma 1 that $u \preceq_{\mathrm{lex}} v$ and $v \preceq_{\mathrm{lex}} u$. Since the relation $\preceq_{\mathrm{lex}}$ is antisymmetric, we have $u = v$, which proves that the relation $\preceq_{\mathrm{UM}}$ is also antisymmetric.
In the same manner, we can prove the following theorem for the operations Up and Swap.
Theorem 2. The binary relation $\preceq_{\mathrm{US}}$ is a partial order of PV sequences; that is, $(\Gamma, \preceq_{\mathrm{US}})$ is a poset.
D. Shape-restricted optimization model
Let $n_v$ be the number of user-item pairs that have the PV sequence $v \in \Gamma$, and let $q_v$ be the number of choices provoked on the base date by user-item pairs that have $v \in \Gamma$. Similarly to Eq. (1), we can calculate empirical item-choice probabilities as $\hat{x}_v := q_v / n_v$ (with $\hat{x}_v := 0$ when $n_v = 0$). Our shape-restricted optimization model minimizes the weighted sum of squared errors subject to the monotonicity constraint:
$$\text{minimize} \quad \sum_{v \in \Gamma} n_v\,(x_v - \hat{x}_v)^2 \tag{7}$$
$$\text{subject to} \quad x_u \le x_v \quad (u \prec v), \tag{8}$$
$$0 \le x_v \le 1 \quad (v \in \Gamma), \tag{9}$$
where $u \prec v$ in Eq. (8) is defined by one of the partial orders $\prec_{\mathrm{UM}}$ and $\prec_{\mathrm{US}}$. The monotonicity constraint (8) aids in enhancing the estimation accuracy of item-choice probabilities. In addition, our shape-restricted optimization model can be used as a postprocessing step to improve the prediction performance of other machine learning methods. Specifically, we first compute item-choice probabilities using a machine learning method, substitute the computed values into $(\hat{x}_v)_{v \in \Gamma}$, and then solve the optimization model (7)-(9). Consequently, we obtain item-choice probabilities corrected by the monotonicity constraint (8). The usefulness of this approach will be illustrated in Section 6.4.
V. ALGORITHMS FOR TRANSITIVE REDUCTION
This section describes our constructive algorithms for transitive reduction, which decrease the problem size of our shape-restricted optimization model.
A. Transitive reduction
A poset $(\Gamma, \preceq)$ can be represented by a directed graph $(\Gamma, E)$, where $\Gamma$ and $E \subseteq \Gamma \times \Gamma$ are the sets of nodes and directed edges, respectively. Each directed edge $(u, v) \in E$ in this graph corresponds to the order relation $u \prec v$, so the number of directed edges coincides with the number of constraints in Eq. (8). The directed graphs shown in Figs. 1(a) and 2(a) can be created easily.
Now, let us suppose that there are three edges $(u, w), (w, v), (u, v) \in E$. In this case, edge $(u, v)$ is implied by the other two edges due to the transitivity of the partial order: $u \prec w$ and $w \prec v$ imply $u \prec v$; or equivalently, $x_u \le x_w$ and $x_w \le x_v$ imply $x_u \le x_v$. As a result, the edge $(u, v)$ is redundant and can be removed from the directed graph. A transitive reduction, also known as a Hasse diagram, of a directed graph $(\Gamma, E)$ is its subgraph $(\Gamma, E^*)$ in which all redundant edges are removed using the transitivity of the partial order [2]. Figs. 1(b) and 2(b) show transitive reductions of the directed graphs shown in Figs. 1(a) and 2(a), respectively. By computing transitive reductions, the number of edges is reduced from 90 to 42 in Fig. 1, and from 81 to 46 in Fig. 2. It is known that the transitive reduction is unique [2].
B. General-purpose algorithms
The transitive reduction (Γ, E * ) is characterized by the following lemma [40].
Lemma 2. (u, v) ∈ E* holds if and only if both of the following conditions are fulfilled: (C1) u ≺ v; (C2) there is no w ∈ Γ ∖ {u, v} such that u ≺ w ≺ v. A basic strategy of general-purpose algorithms for transitive reduction involves the following steps. Step 1: The directed graph (Γ, E) representing the poset is created by enumerating all edges (u, v) satisfying u ≺ v.
Step 2: The transitive reduction (Γ, E*) is computed from the directed graph (Γ, E) using Lemma 2. Various algorithms have been proposed to speed up the computation of Step 2. Recall that |Γ| = (m + 1)^n in our situation. Warshall's algorithm [49] has a time complexity of O((m + 1)^{3n}) to complete Step 2 [40]. This time complexity can be reduced to O((m + 1)^{2.3729n}) using a sophisticated algorithm for fast matrix multiplication [24].
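The Lemma-2 characterization translates into a straightforward (if slow) reduction routine. The sketch below assumes Step 1 has already produced, for each node, the set of strictly greater nodes; it is meant only to make the cubic-style cost of the naive approach visible.

```python
def transitive_reduction(nodes, greater):
    """Naive Step 2: `greater[u]` is the set of all v with u < v (from Step 1)."""
    reduced = set()
    for u in nodes:
        for v in greater[u]:
            # (C1) holds by construction: v is reachable from u.
            # (C2): keep the edge only if no w lies strictly between u and v.
            if not any(v in greater[w] for w in greater[u] if w != v):
                reduced.add((u, v))
    return reduced
```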
However, these general-purpose algorithms are clearly inefficient, especially when n is very large. In addition, a huge amount of computation is also required for Step 1. To resolve this difficulty, we devise specialized algorithms that directly construct a transitive reduction.
C. Constructive algorithms
Let (Γ, E*_UM) be a transitive reduction of a directed graph (Γ, E_UM) representing the poset (Γ, ⪯_UM). Then, the transitive reduction can be characterized by the following theorem. Theorem 3. (u, v) ∈ E*_UM holds if and only if one of the following conditions is fulfilled: (UM1) v = Up(u, n); (UM2) v = Move(u, s, s + 1) for some s ∈ [1, n − 1] (in both cases, provided the operation is defined).
We show a pseudocode of our constructive algorithm (Algorithm 1) in Appendix B-A. Recalling the time complexity analysis of breadth-first search [11], one readily sees that the time complexity of Algorithm 1 is O(n(m + 1)^n), which is much smaller than the O((m + 1)^{2.3729n}) achieved by the general-purpose algorithm [24], especially when n is very large.
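Under the characterization reconstructed in Theorem 3 (and the operation semantics assumed earlier), the reduced edges can even be emitted by a plain enumeration of Γ, without first building E_UM. This sketch is a correctness illustration, not Algorithm 1 itself, which instead traverses Γ breadth-first.

```python
from itertools import product

def reduced_edges_um(n, m):
    """Emit E*_UM directly: one edge per valid (UM1) or (UM2) instance."""
    edges = []
    for u in product(range(m + 1), repeat=n):
        if u[-1] < m:                         # (UM1): v = Up(u, n)
            edges.append((u, u[:-1] + (u[-1] + 1,)))
        for s in range(n - 1):                # (UM2): v = Move(u, s, s+1)
            if u[s] < m and u[s + 1] > 0:
                v = u[:s] + (u[s] + 1, u[s + 1] - 1) + u[s + 2:]
                edges.append((u, v))
    return edges
```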
Next, we focus on the transitive reduction (Γ, E*_US) of a directed graph (Γ, E_US) representing the poset (Γ, ⪯_US). Then, the transitive reduction can be characterized by the following theorem. Theorem 4. (u, v) ∈ E*_US holds if and only if one of the following conditions is fulfilled: (US1) v = Up(u, s) for some s ∈ [1, n] such that u_j ∉ {u_s, u_s + 1} for all j ∈ [s + 1, n]; (US2) v = Swap(u, s, t) for some (s, t) ∈ [1, n] × [1, n] such that u_j ∉ [u_s, u_t] for all j ∈ [s + 1, t − 1]. Consider u = (0, 2, 1) again as an example with (n, m) = (3, 2); see Table III. We show a pseudocode of our constructive algorithm (Algorithm 2) in Appendix B-B. Its time complexity is estimated to be O(n²(m + 1)^n), which is larger than that of Algorithm 1 but is still much smaller than that of the general-purpose algorithm [24], especially when n is very large.
VI. EXPERIMENTS
The experimental results reported in this section evaluate the effectiveness of our method for estimating item-choice probabilities.
We used real-world clickstream data collected from the Chinese e-commerce website Tmall. We used the data set preprocessed by Ludewig and Jannach [27]. Each record corresponds to one PV and contains information such as user ID, item ID, and time stamp. The data set involves 28,316,459 unique user-item pairs composed of 422,282 users and 624,221 items. Among the methods compared below (Table IV), SeqUM denotes our PV sequence model (7)-(9) using the poset (Γ, ⪯_UM), and SeqUS denotes the same model using (Γ, ⪯_US).
A. Methods for comparison
We compared the performance of the methods listed in Table IV. All computations were performed on an Apple MacBook Pro computer with an Intel Core i7-5557U CPU (3.10 GHz) and 16 GB of memory.
The optimization models (2)-(5) and (7)-(9) were solved using OSQP [42], a numerical optimization package for solving convex quadratic optimization problems. As in Table I, daily-PV sequences were calculated for each user-item pair, where m is the maximum number of daily PVs, and n is the number of terms (i.e., past days) in the PV sequence. In this process, all PVs from more than n days ago were added to the number of PVs n days ago, and numbers of daily PVs exceeding m were rounded down to m. Similarly, the recency and frequency combinations (r, f) ∈ R × F were calculated from daily PVs as in Table I, where (|R|, |F|) = (n, m).
Other machine learning methods (i.e., LR, ANN, and RF) were implemented using the LogisticRegressionCV, MLPRegressor, and RandomForestRegressor functions in scikit-learn, a Python library of machine learning tools. Related hyperparameters were tuned through 3-fold cross-validation according to the parameter settings of the benchmark study [34]. These machine learning methods employed the PV sequence (v_1, v_2, …, v_n) as n input variables for computing item-choice probabilities. Here, each input variable was standardized, and undersampling was conducted to improve prediction performance.
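A minimal scikit-learn sketch of this baseline pipeline is shown below, with toy data standing in for the Tmall training set; hyperparameter grids and the undersampling step are omitted, and all names are illustrative rather than the authors' exact configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.preprocessing import StandardScaler

# Toy stand-in for the clickstream features: each row is a PV sequence
# (v_1, ..., v_n) with (n, m) = (7, 3); the target is a 0/1 choice indicator.
rng = np.random.default_rng(0)
X_train = rng.integers(0, 4, size=(1000, 7)).astype(float)
y_train = rng.integers(0, 2, size=1000).astype(float)

scaler = StandardScaler()
X_std = scaler.fit_transform(X_train)        # standardize each input variable

rf = RandomForestRegressor(random_state=0)   # RF branch; LR and ANN are analogous
rf.fit(X_std, y_train)
scores = rf.predict(X_std)                   # regression scores used as
                                             # item-choice probability estimates
```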
B. Performance evaluation methodology
There are five pairs of training and validation sets of clickstream data in the preprocessed data set [27]. As shown in Table V, each training period is 90 days, and the next day is the validation period. The first four pairs of training and validation sets were used for model estimation, and the fifth (last) pair was used for performance evaluation. To examine how the sample size affects prediction performance, we prepared small-sample training sets by choosing user-item pairs randomly from the original training set. Here, the sampling rates are 0.1%, 1%, and 10%, and the original training set is referred to as "full-sample." Note that the results were averaged over ten trials for the sampled training sets.
We considered the top-N selection task to evaluate prediction performance. Specifically, we focused on items that were viewed by a particular user during a training period. From them, we selected I_sel, a set of the top N items for the user according to the estimated item-choice probabilities. Here, recently viewed items were preferred when two or more items had the same choice probability. Let I_view be the set of items viewed by the user in the validation period. Then, the F1 score is defined as the harmonic mean of Recall := |I_sel ∩ I_view| / |I_view| and Precision := |I_sel ∩ I_view| / |I_sel|, namely,

F1 score := 2 · Recall · Precision / (Recall + Precision).
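The definition translates directly into code; here is a small sketch for a single user, with set-valued arguments as an illustrative convention.

```python
def f1_at_n(selected, viewed):
    """F1 score of a top-N selection `selected` against the items `viewed`."""
    hits = len(selected & viewed)
    if hits == 0:
        return 0.0
    recall = hits / len(viewed)
    precision = hits / len(selected)
    return 2 * recall * precision / (recall + precision)

# e.g. f1_at_n({"a", "b", "c"}, {"b", "d"}) == 0.4
#      (Recall = 1/2, Precision = 1/3)
```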
In the following sections, we examine F1 scores averaged over all users. Note that the percentage of user-item pairs leading to item choices is only 0.16%.
C. Effects of the transitive reduction
We generated constraints in Eq. (8) based on the following three directed graphs.
Case 1 (Enumeration): All edges (u, v) satisfying u ≺ v were enumerated.
Case 2 (Operation): Edges corresponding to the operations Up, Move, and Swap were generated as in Figs. 1(a) and 2(a).
Case 3 (Reduction): The transitive reduction was computed using our algorithms, as in Figs. 1(b) and 2(b).
Table VI gives the problem size of our PV sequence model (7)-(9) for some settings (n, m) of the PV sequence. Here, the column labeled "#Vars" shows the number of decision variables (i.e., (m + 1)^n), and the subsequent columns show the number of constraints in Eq. (8) for the three cases mentioned above.
The number of constraints grew rapidly as n and m increased in the enumeration case. In contrast, the number of constraints was always smallest in the transitive reduction case. When (n, m) = (5, 6), for instance, the number of constraints in the operation case was reduced by the transitive reduction to 63798/195510 ≈ 32.6% for SeqUM and 85272/144060 ≈ 59.2% for SeqUS. The number of constraints was larger for SeqUM than for SeqUS in the enumeration and operation cases. In contrast, the number of constraints was often smaller for SeqUM than for SeqUS in the reduction case. This means that the transitive reduction had a greater impact on SeqUM than on SeqUS in terms of the number of constraints.

Table VII gives the computation time required for solving the optimization problem (7)-(9) for some settings (n, m) of the PV sequence. Here, "OM" indicates that the computation was aborted due to running out of memory. The enumeration case often ran out of memory because of its huge number of constraints; see also Table VI. The operation and reduction cases completed the computations for all the settings of (n, m). Moreover, the transitive reduction made the computations faster. A notable example is SeqUM with (n, m) = (5, 6); the computation time in the reduction case (i.e., 86.02 s) was only about one-tenth of that in the operation case (i.e., 906.76 s). These results demonstrate that the transitive reduction improves efficiency in terms of both computation time and memory usage.

Table VIII gives the prediction performance of our optimization model (7)-(9) for some settings (n, m) of the PV sequence. Here, for each n ∈ {3, 4, …, 9}, the largest m was chosen such that the computation finished within 30 min. Both SeqUM and SeqUS always delivered higher F1 scores than SeqEmp did. This means that our monotonicity constraint (8) works well for improving prediction performance. The F1 scores provided by SeqUM and SeqUS were very similar, and they were largest with (n, m) = (7, 3). In view of these results, we focus on the settings (n, m) ∈ {(7, 3), (5, 6)} in the following sections.

When the full-sample training set was used, SeqUM and SeqUS always delivered better prediction performance than the other methods did. When the 1%- and 10%-sampled training sets were used, the prediction performance of SeqUS decreased slightly, whereas SeqUM still performed best of all the methods. When the 0.1%-sampled training set was used, 2dimMono always performed better than SeqUS did, and in the case of (n, m) = (5, 6), 2dimMono showed the best prediction performance of all the methods. These results suggest that our PV sequence model performs very well, especially when the sample size is sufficiently large.
The prediction performance of SeqEmp deteriorated rapidly as the sampling rate decreased, and this performance was always much worse than that of 2dimEmp. Meanwhile, SeqUM and SeqUS maintained high prediction performance even when the 0.1%-sampled training set was used. This means that the monotonicity constraint (8) in our PV sequence model is more effective than the monotonicity constraints (3)-(4) in the two-dimensional monotonicity model.

Fig. 4 shows the F1 scores of the machine learning methods (i.e., LR, ANN, and RF) and our PV sequence model (i.e., SeqUM) using the full-sample training set, where the number of selected items is N ∈ {3, 5, 10}, and the setting of the PV sequence is (n, m) ∈ {(7, 3), (5, 6)}. Note that in this figure, SeqUM(*) represents the optimization model (7)-(9) applied in the postprocessing step to the prediction values computed by the corresponding machine learning method.

Fig. 5 shows item-choice probabilities estimated by our PV sequence model using the full-sample training set. Since SeqEmp takes no account of the monotonicity constraint (8), item-choice probabilities estimated by SeqEmp have irregular shapes for v_3 ∈ {1, 2}. In contrast, item-choice probabilities estimated with the monotonicity constraint (8) are relatively smooth. Because of the Up operation, item-choice probabilities estimated by SeqUM and SeqUS increase as (v_1, v_2) moves from (0, 0) to (6, 6). Because of the Move operation, item-choice probabilities estimated by SeqUM also increase as (v_1, v_2) moves from (0, 6) to (6, 0). On the other hand, item-choice probabilities estimated by SeqUS are relatively high around (v_1, v_2) = (3, 3). This highlights the difference in the monotonicity constraint (8) between the two posets (Γ, ⪯_UM) and (Γ, ⪯_US).

Fig. 6 shows item-choice probabilities estimated by our PV sequence model using the 10%-sampled training set, where the setting of the PV sequence is (n, m) = (5, 6). In this case, since the sample size was reduced, item-choice probabilities estimated by SeqEmp are highly unstable. In particular, item-choice probabilities were estimated to be zero for all (v_1, v_2) with v_1 ≥ 3 in Fig. 6(c); however, this is unreasonable from the perspective of frequency. In contrast, SeqUM and SeqUS estimated item-choice probabilities that increase monotonically with respect to (v_1, v_2).
VII. CONCLUSION
This paper dealt with a shape-restricted optimization model for estimating item-choice probabilities on an e-commerce website. Our monotonicity constraint based on tailored order relations made it possible to obtain closer estimates of item-choice probabilities for all possible PV sequences. To improve the computational efficiency of our optimization model, we devised constructive algorithms for transitive reduction, which removes all redundant constraints from the optimization model.
We assessed the effectiveness of our method through experiments using real-world clickstream data. The experimental results demonstrated that the transitive reduction enhanced the efficiency of our optimization model in terms of both computation time and memory usage. In addition, our method delivered better prediction performance than the two-dimensional monotonicity model [19] and common machine learning methods. Our method was also helpful in correcting prediction values computed by other machine learning methods.
Our research contribution is threefold. First, we derived two types of posets by exploiting the recency and frequency properties of a user's previous PVs. These posets allow us to place appropriate monotonicity constraints on item-choice probabilities. Next, we developed algorithms for transitive reduction specialized to our posets. Our algorithms are more efficient in terms of time complexity than general-purpose algorithms for transitive reduction. Finally, our method demonstrates the great potential of shape-restricted regression for predicting user behavior on e-commerce websites.
Once the optimization model for estimating item-choice probabilities has been solved, the obtained results can easily be put into practical use on e-commerce websites. Accurate estimates of item-choice probabilities will be useful in customizing sales promotions according to the needs of a particular user. In addition, our method, which can estimate user preferences from clickstream data, aids in creating a high-quality user-item rating matrix for recommender algorithms [20].
A future direction of study will be to develop new posets that further improve the prediction performance of our PV sequence model. Another direction of future research will be to incorporate user/item heterogeneity into our optimization model, as in the case of latent class modeling of two-dimensional probability tables [32].
The "only if" part: Firstly, we suppose that (u, v) ∈ E * UM . We then have v ∈ UM({u}) from Definition 4 and Lemma 2. Therefore, we consider the following two cases. Case 1: v = Up(u, s) for some s ∈ [1, n] For the sake of contradiction, we assume that s = n (i.e., s ≤ n − 1). Then there exists an index j such that s < j ≤ n. If u j > 0, then we have w = Move(u, s, j) and v = Up(w, j). If u j = 0, then we have w = Up(u, j) and v = Move(w, s, j). These results imply that u ≺ UM w ≺ UM v, which contradicts (u, v) ∈ E * UM due to condition (C2) of Lemma 2. Case 2: v = Move(u, s, t) for some (s, t) ∈ [1, n] × [1, n] We assume that t = s + 1 (i.e., t ≥ s + 2) for the sake of contradiction. Then there exists an index j such that s < j < t. If u j > 0, then we have w = Move(u, s, j) and v = Move(w, j, t). If u j = 0, then we have w = Move(u, j, t) and v = Move(w, s, j). These results imply that u ≺ UM w ≺ UM v, which contradicts (u, v) ∈ E * UM due to condition (C2) of Lemma 2.
The "if" part: Next, we show that (u, v) ∈ E * UM in the following two cases. Case 1: Condition (UM1) is fulfilled Condition (C1) of Lemma 2 is clearly satisfied. To draw the condition (C2), we consider w ∈ Γ such that u UM w UM v. From Lemma 1, we have u lex w lex v. Since u is next to v in the lexicographic order, we have w ∈ {u, v}. Case 2: Condition (UM2) is fulfilled Condition (C1) of Lemma 2 is clearly satisfied. To draw the condition (C2), we consider w ∈ Γ such that u UM w UM v. From Lemma 1, we have u lex w lex v, which implies that w j = u j for all j ∈ [1, s − 1]. Therefore, we cannot apply any operations to w j for j ∈ [1, s − 1] in the process of transforming w from u into v. To keep the value of n j=1 w j constant, we can apply only the Move operation. However, once the Move operation is applied to w j for j ∈ [s + 2, n], the resultant sequence cannot be converted into v. As a result, only Move( · , s, s + 1) can be performed. This means that w = u or w = Move(u, s, s + 1) = v.
B. Proof of Theorem 4
The "only if" part: Firstly, we suppose that (u, v) ∈ E * US . We then have v ∈ US({u}) from Definition 5 and Lemma 2. Therefore, we consider the following two cases. Case 1: v = Up(u, s) for some s ∈ [1, n] For the sake of contradiction, we assume that u j ∈ {u s , u s +1} for some j ∈ [s + 1, n]. If u j = u s , then we have w = Up(u, j) and v = Swap(w, s, j). If u j = u s + 1, then we have w = Swap(u, s, j) and v = Up(w, j). These results imply that u ≺ US w ≺ US v, which contradicts (u, v) ∈ E * US due to condition (C2) of Lemma 2. Case 2: v = Swap(u, s, t) for some (s, t) ∈ [1, n] × [1, n] For the sake of contradiction, we assume that u j ∈ [u s , u t ] for some j ∈ [s + 1, t − 1]. If u s < u j < u t , then we have w 1 = Swap(u, j, t), w 2 = Swap(w 1 , s, j), and v = Swap(w 2 , j, t). If u j = u s , then we have w = Swap(u, j, t) and v = Swap(w, s, j). If u j = u t , then we have w = Swap(u, s, j) and v = Swap(w, j, t). All these results contradict (u, v) ∈ E * US due to condition (C2) of Lemma 2.
The "if" part: Next, we show that (u, v) ∈ E * US in the following two cases. Case 1: Condition (US1) is fulfilled Condition (C1) of Lemma 2 is clearly satisfied. To draw the condition (C2), we consider w ∈ Γ such that u US w US v. From Lemma 1, we have u lex w lex v, which implies that w j = u j for all j ∈ [1, s − 1]. Therefore, we cannot apply any operations to w j for j ∈ [1, s − 1] in the process of transforming w from u into v. We must apply the Up operation only once because the value of n j=1 w j remains the same after the Swap operation. The condition (US1) guarantees that for all j ∈ [s+1, n], w j does not coincide with u s +1 even if Up( · , j) is adopted. Therefore, Swap( · , s, j) for j ∈ [s + 1, n] never lead to w s = u s + 1. As a result, Up( · , s) must be performed. Other applicable Swap operations produce a sequence that cannot be converted into v. This means that w = u or w = Up(u, s) = v. Case 2: Condition (US2) is fulfilled Condition (C1) of Lemma 2 is clearly satisfied. To draw the condition (C2), we consider w ∈ Γ such that u US w US v. From Lemma 1, we have u lex w lex v. This implies that w j = u j for all j ∈ [1, s−1], and that w s ∈ [u s , u t ]. Therefore, we cannot apply any operations to w j for j ∈ [1, s − 1] in the process of transforming w from u into v. To keep the value of n j=1 w j constant, we can apply only the Swap operation. However, once the Swap operation is applied to w j for j ∈ [t+1, n], the resultant sequence cannot be converted into v. We cannot adopt w = Swap(u, s, j) for j ∈ [s+1, t−1] due to the condition (US2). If we adopt w = Swap(u, j, t) for j ∈ [s + 1, t−1], we have w t ≤ u s −1 due to the condition (US2), so the application of Swap( · , t, j) is unavoidable for j ∈ [t + 1, n]. As a result, Swap( · , s, t) must be performed. Other applicable Swap operations produce a sequence that cannot be converted into v. This means that w = u or w = Swap(u, s, t) = v.
APPENDIX B PSEUDOCODES
A. Our constructive algorithm for (Γ, E*_UM)

The nodes and directed edges of the graph (Γ, E*_UM) are enumerated in a breadth-first-search manner and stored in two lists L and E, respectively. We use APPEND(L, v), which adds a vertex v to the rear of L. Similarly, we use APPEND(E, (u, v)).
A queue Q is used to store the nodes of L whose successors are under investigation (i.e., the "frontier" of L). The nodes in Q are listed in ascending order of their depth, where the depth of v is the shortest-path length from (0, 0, …, 0) to v. We use DEQUEUE(Q), which returns and deletes the first element of Q, and ENQUEUE(Q, v), which adds v to the rear of Q.
Our constructive algorithm for computing the transitive reduction (Γ, E*_UM) is summarized as Algorithm 1. For a node u given in line 6, we find all the nodes v satisfying condition (UM1) in lines 7-10, and those satisfying condition (UM2) in lines 11-15. The membership tests for D_U and D_M can be done in O(1) time by their definitions. Recall that DEQUEUE, ENQUEUE, and APPEND can be done in O(1) time. The for-loop in lines 11-15 takes O(n) time. Therefore, recalling that |Γ| = (m + 1)^n, we see that Algorithm 1 runs in O(n(m + 1)^n) time. The initialization of Algorithm 1 reads, in part:

L ← list consisting of (0, 0, …, 0)   ▷ finally gives Γ
3: E ← empty list   ▷ finally gives E*_UM
4: Q ← queue consisting of (0, 0, …, 0)
Our constructive algorithm for computing (Γ, E*_US) is summarized as Algorithm 2; its pseudocode contains, among others, the lines

25: else if u_t = u_s then
26:     break

The difference from Algorithm 1 lies in how the nodes v satisfying condition (US1) or (US2) are found. For a node u given in line 6, we find all the nodes v satisfying condition (US1) in lines 7-16, and those satisfying condition (US2) in lines 17-26. We explain the latter part here.
Let (u, v) be a directed edge added to E in line 22, and let (s, t) be such that v = Swap(u, s, t). From line 20, we have u_s < u_t < b. Note that for each t in line 19, the value b gives the smallest value of u_j with u_j > u_s for j ∈ [s + 1, t − 1]. Also, due to lines 25-26, u_j ≠ u_s for j ∈ [s + 1, t − 1]. Combining these observations, we see that for j ∈ [s + 1, t − 1], either u_j < u_s or u_j ≥ b > u_t (meaning u_j ∉ [u_s, u_t]).
Therefore, the pair (u, v) satisfies condition (US2). It is easy to verify that this process finds all the vertices v satisfying condition (US2). Since both of the double for-loops in lines 7-16 and 17-26 take O(n²) time, Algorithm 2 runs in O(n²(m + 1)^n) time. | 2020-04-21T01:01:28.636Z | 2020-04-18T00:00:00.000 | {
"year": 2020,
"sha1": "9c6d055a04dd66655db8c3872efadd7aad5e20c9",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "9c6d055a04dd66655db8c3872efadd7aad5e20c9",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
115158179 | pes2o/s2orc | v3-fos-license | Gersten's conjecture
The purpose of this article is to prove that Gersten's conjecture is true for a commutative regular local ring. As applications, we will prove the vanishing conjecture for certain Chow groups, the generator conjecture for certain $K$-groups, and Bloch's formula in the absolute case.
Introduction
In this paper we show Gersten's conjecture [Ger73] in the unramified case. For any commutative noetherian ring A with 1 and any natural number 0 ≤ p ≤ dim A, let M^p_A denote the category of finitely generated A-modules M whose support has codimension ≥ p in Spec A. Here is a statement of Gersten's conjecture: For any commutative regular local ring A and natural number 1 ≤ p ≤ dim A, the canonical inclusion M^p_A ֒→ M^{p−1}_A induces the zero map on K-theory K(M^p_A) → K(M^{p−1}_A), where K(M^i_A) denotes the K-theory of the abelian category M^i_A. We will prove this conjecture for any commutative regular local ring A which is smooth over a commutative discrete valuation ring. (See Corollary 2.2.10.) We will also show the conjecture for any commutative regular local ring A and p = dim A. (See Corollary 2.2.9.) A key ingredient of the proofs is the notion of Koszul cubes (see §1), which is introduced and studied in [Moc13a] and [Moc13b].
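For reference, the statement just given can be displayed compactly (the notation follows the paragraph above):

```latex
% Gersten's conjecture: for a commutative regular local ring A and every
% 1 <= p <= dim A, the inclusion of support conditions kills K-theory.
\[
  K(\mathcal{M}^{p}_{A}) \longrightarrow K(\mathcal{M}^{p-1}_{A})
  \quad\text{is the zero map for all } 1 \le p \le \dim A .
\]
```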
Koszul cubes
In this section, we recall the notion of Koszul cubes from [Moc13a] and [Moc13b] and study them further. In particular, we introduce simple Koszul cubes, which play an important role in the proof of the main theorem.
Multi semi-direct products of exact categories
In this subsection, we recall the notions and fundamental properties of multi semi-direct products of exact categories from [Moc13a] and [Moc13b]. Let S be a set. We start by reviewing the notion of S-cubes.
1.1.1 (Cubes). For a set S, an S-cube in a category C is a contravariant functor from P(S) to C. We denote the category of S-cubes in a category C by Cub_S C, where morphisms between cubes are just natural transformations. Let x be an S-cube in C. For any T ∈ P(S), we denote x(T) by x_T and call it a vertex of x (at T). For k ∈ T, we also write d^{x,k}_T, or simply d^k_T, for x(T ∖ {k} ֒→ T) and call it a (k-)boundary morphism of x (at T). An S-cube x is monic if for any pair of subsets U ⊂ V of S, x(U ⊂ V) is a monomorphism.
1.1.2 (Restriction of cubes).
Let U and V be a pair of disjoint subsets of S. We define i^V_U : P(U) → P(S) to be the functor which sends an object A in P(U) to the disjoint union A ∪ V of A and V. Composition with i^V_U induces the functor (i^V_U)* : Cub_S C → Cub_U C. For any S-cube x in a category C, we write x|^V_U for (i^V_U)* x, and call it the restriction of x (to U along V).
In the rest of this section, we assume that S is a finite set.
1.1.3 (Typical Koszul cubes). Definition. Let
A be a commutative ring with 1, f_S = {f_s}_{s∈S} a family of elements in A indexed by a non-empty set S, and r ≥ 0 and r ≥ n_s ≥ 0 integers for each s in S. We set n_S := {n_s}_{s∈S}. We define Typ_A(f_S; r, n_S) to be an S-cube of finitely generated free A-modules by setting, for each element s in S and subsets U ⊂ S and V ⊂ S ∖ {s}, Typ_A(f_S; r, n_S)_U := A^{⊕r} and Typ_A(f_S; r, n_S)(V ⊂ V ∪ {s}) := diag(f_s, …, f_s, 1, …, 1) with n_s copies of f_s. In particular, if r = n_s = 1 for every s in S, then we write Typ_A(f_S) for Typ_A(f_S; 1, {1}_{s∈S}). In the rest of this subsection, let A be an abelian category.
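To make the definition concrete, the smallest non-trivial case can be drawn out; the boundary convention (diagonal matrices of f_s's and 1's) is the one reconstructed above, so this square is an illustration rather than a quotation from the source.

```latex
% For S = {s, t} and r = n_s = n_t = 1, the fundamental typical Koszul cube
% Typ_A(f_S) is the commutative square below; d^k_T : x_T -> x_{T \setminus {k}}
% is multiplication by f_k, and the square commutes since f_s f_t = f_t f_s.
\[
\begin{CD}
x_{\{s,t\}} = A @>{f_t}>> x_{\{s\}} = A \\
@V{f_s}VV @VV{f_s}V \\
x_{\{t\}} = A @>>{f_t}> x_{\emptyset} = A
\end{CD}
\]
```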
1.1.4 (Admissible cubes).
Fix an S-cube x in an abelian category A. For any element k in S, the k-direction 0-th homology of x is an S ∖ {k}-cube H^k_0(x) in A defined by H^k_0(x)_T := Coker d^k_{T∪{k}}. For any T ∈ P(S) and k ∈ S ∖ T, we denote the canonical projection morphism x_T → H^k_0(x)_T by π^{k,x}_T, or simply π^k_T. When #S = 1, we say that x is admissible if x is monic, namely if its unique boundary morphism is a monomorphism. For #S > 1, we define the notion of an admissible cube inductively by saying that x is admissible if x is monic and if, for every k in S, H^k_0(x) is admissible. If x is admissible, then for any distinct elements i_1, …, i_k in S and for any permutation σ of the set {i_1, …, i_k}, the identity morphism on x induces an isomorphism H^{i_1}_0 H^{i_2}_0 ⋯ H^{i_k}_0(x) ≅ H^{i_{σ(1)}}_0 H^{i_{σ(2)}}_0 ⋯ H^{i_{σ(k)}}_0(x) (cf. [Moc13a, 3.11]). For an admissible S-cube x and a subset T = {t_1, …, t_k} of S, H^T_0(x) := H^{t_1}_0 ⋯ H^{t_k}_0(x) is therefore a well-defined S ∖ T-cube for any T ∈ P(S); see [Moc13a, 3.13].
In the rest of this section, let U and V be a pair of disjoint subsets of S. 1.1.5 (Multi semi-direct products). Definition. Let F = {F_T}_{T∈P(S)} be a family of full subcategories of A. We define the multi semi-direct product ⋉F to be the full subcategory of Cub_S A consisting of those S-cubes x such that x is admissible and each vertex of H^T_0(x) is in F_T for any T ∈ P(S). If S is a singleton (namely #S = 1), then we write F_S ⋉ F_∅ for ⋉F. For any s ∈ S, we can regard S-cubes as S ∖ {s}-cubes of {s}-cubes; namely, by Lemma 1.1.6 below, we have a corresponding equality for any s ∈ S.
For any element u in U, by Lemma 1.1.6 again, we also have a similar equality.

1.1.6. Lemma. Let x be an S-cube in A and X and Y a pair of disjoint subsets of S. We define x|^?_X to be an S ∖ X-cube of X-cubes by sending each subset T of S ∖ X to x|^T_X. For each element k ∈ S ∖ X and any subset T ⊂ S ∖ (X ⊔ {k}), the boundary morphism d^k is defined componentwise for any subset W ⊂ X; we refer to the defining equalities as (7) and (8). Then:
(1) We have an equality of S ∖ (X ⊔ Y)-cubes relating the iterated restrictions.
(2) Moreover, assume that x is admissible and let {F_T}_{T∈P(S)} be a family of full subcategories of A. Then we have a corresponding equality of the associated subcategories.
Proof.
(1) By induction on the cardinality of Y, we may assume that Y is a singleton Y = {y}. Then for any subset T ⊂ X and W ⊂ S ∖ (X ⊔ {y}), we have the desired equalities for any element k ∈ S ∖ (X ⊔ {y} ⊔ W). Hence we obtain the result.
(2) We proceed by induction on the cardinality of S. We give a proof only for (i); the proof for (ii) is similar. For any element k ∈ X and any subset W ⊂ X ∖ {k}, the equality (8) shows that d^k is a monomorphism, and for any element y ∈ X, the equality (7) identifies the relevant homology cubes. Next we assume that x is in ⋉F; then each boundary morphism is a monomorphism by assumption. For any element y in S, we will prove that H^y_0(x) is an admissible S ∖ {y}-cube. We proceed by induction on the cardinality of S. First we assume that y is not in X; then the claim follows from the hypothesis on x. Next we assume that y is in X; then, for any subset T ⊂ S ∖ X, by replacing X with X ∖ {y} we may assume that y is not in X, and it comes down to the first case. Hence we complete the proof.
1.1.7 (Exact categories).
For exact categories, we basically follow the notation in [Qui73]. Recall that a functor between exact categories f : E → E′ is exact if it preserves admissible exact sequences. For an exact category E, we say that a full subcategory F of E is an exact subcategory if it is an exact category and the inclusion functor F ֒→ E is exact, and we say that F is a strict exact subcategory if it is an exact subcategory and moreover the inclusion functor reflects exactness. We say that F is an extension-closed (full) subcategory of E, or closed under extensions in E, if for any admissible exact sequence x ↣ y ։ z in E in which x and z are isomorphic to objects in F respectively, y is also isomorphic to an object in F.
1.1.8 (Exact family).
Let F = {F_T}_{T∈P(S)} be a family of strict exact subcategories of an abelian category A. We say that F is an exact family (of A) if for any disjoint pair of subsets P and Q of S, ⋉F|^Q_P is a strict exact subcategory of Cub_P A. If each F_T is closed under either extensions or under taking sub- and quotient objects and direct sums in A, then F is an exact family (cf. [Moc13a, 3.20]).
1.1.9 (Restriction of cubes).
Let F = {F_T}_{T∈P(S)} be an exact family of A. For any pair of disjoint subsets U and V of S, we define res^V_{U,F} : ⋉F → ⋉F|^V_U to be the functor sending an object x in ⋉F to x|^V_U. By Lemma 1.1.6 and Corollary 3.14 in [Moc13a], this functor is well-defined and exact. We call this functor the restriction functor of ⋉F to U along V. For any non-empty subset W of S, we set res_{W,F} := res^{S∖W}_{W,F}.
Structure of simple Koszul cubes
In this subsection, we fix a non-empty finite set S and a noetherian commutative ring A with 1. We start by reviewing the notion of A-sequences.
For any subset T of S, we denote the family {f_t}_{t∈T} by f_T. We write f_T A for the ideal of A generated by the family f_T.
1.2.2.
We denote the category of finitely generated A-modules by M_A. Let the letter p be a natural number or ∞, and let I be an ideal of A.
Since the category M^I_A(p) is closed under extensions in M_A, it can be considered to be an exact category in the natural way. Notice that if I is the zero ideal of A, then M^I_A(0) is just the category P_A of finitely generated projective A-modules.
where ? = ∅ or red. For any subset Y of V, we have a corresponding equality.
(Simple Koszul cubes). Definition.
Let X be a subset of S, W a subset of S ∖ X, and W = U ⊔ V a disjoint decomposition of W, and let the letter p be a natural number or ∞ such that p ≥ #(U ⊔ X).
To examine the structure of simple Koszul cubes, we sometimes suppose the following assumptions. In particular, under them, the inclusion functor of the category Kos_{A,simp} of simple Koszul cubes is an equivalence of categories.

Proof. By Assumption 1.2.9, x_∅ is a finitely generated free A-module. We denote its rank by r and fix an isomorphism of A-modules Θ_∅ : x_∅ ≅ A^{⊕r}. We shall assume that S is the set [n − 1] = {0, 1, …, n − 1} for some positive integer n.
(2) For some element s in S, H s 0 (a) is an isomorphism. Proof. Obviously condition (1) (resp. (3), (2)) implies condition (3) (resp. (2), (4)). First, we assume condition (2) and will prove condition (1). For any subset of U of S {s}, we will prove that a U⊔{s} and a U are isomorphisms. By replacing x with x| U {s} , we shall assume that S is a singleton S = {s} and U is the empty set. In the commutative diagram by Lemma 1.2.13 below, a / 0 is an isomorphism and then a {s} is also by applying five lemma to the diagram above. Hence we obtain the result. Next we prove that condition (4) implies condition (1). We proceed by induction on the cardinality of S. If S is a singleton, assertion follows from the first paragraph. Assume that #S > 1 and let us fix an element s of S. Then by inductive hypothesis, it turns out that the endomorphism H s 0 a of ⊕m is an isomorphism. Then by virtue of the first paragraph again, a is an isomorphism.
1.2.13. Lemma. Let I be an ideal of A which is contained in the Jacobson radical of A, and let X be an m × m matrix whose coefficients are in A. If X mod I is an invertible matrix, then X is also invertible.
Proof. By considering the determinant of X, we may assume that m = 1. Then the assertion follows from Nakayama's lemma.
1.2.14. Definition. Let x be a simple Koszul cube associated with f_S which is isomorphic to Typ_A(f_S; r, {n_t}_{t∈S}) for some integers r ≥ 0 and r ≥ n_t ≥ 0 for each t in S. Let s be an element of S. We say that x is non-degenerate along s if n_s = r, and degenerate along s if n_s = 0.
We can similarly prove the following variant of Lemma 1.2.12.
1.2.15. Lemma.
We suppose Assumption 1.2.10. Let x be a simple Koszul cube associated with f_S which is isomorphic to Typ_A(f_S; r, {n_t}_{t∈S}) for some integers r ≥ 0 and r ≥ n_t ≥ 0 for each t in S. We assume that x is non-degenerate along s for some element s of S. Then, for an endomorphism f of x, the following conditions are equivalent: (1) f is an isomorphism.
Let y be a typical Koszul cube of type (r′, {n′_t}_{t∈S}) associated with f_S for some integers r′ ≥ 0 and r′ ≥ n′_t ≥ 0. Then we can denote a morphism of S-cubes of A-modules ϕ : x → y in block-matrix form. Proof. For ϕ_{n→n}, the assertion follows from Lemma 1.2.15, and for ϕ_{d→d}, we apply the same lemma to UD_s(ϕ).
Lemma. Let

Typ_A(f_S)^{⊕l} --α--> Typ_A(f_S)^{⊕m} --β--> Typ_A(f_S)^{⊕n}   (19)

be a sequence of fundamental typical Koszul cubes such that βα = 0. If the induced sequence of A/f_S-modules

H^S_0(Typ_A(f_S)^{⊕l}) → H^S_0(Typ_A(f_S)^{⊕m}) → H^S_0(Typ_A(f_S)^{⊕n})   (20)

is exact, then the sequence (19) is also (split) exact.
Proof. Since the sequence (20) is an exact sequence of projective A/f_S-modules, it is a split exact sequence; hence m = l + n and there exists a homomorphism of A/f_S-modules γ̄ : H^S_0(Typ_A(f_S)^{⊕n}) → H^S_0(Typ_A(f_S)^{⊕m}) such that H^S_0(β)γ̄ = id_{H^S_0(Typ_A(f_S)^{⊕n})}. Then, by Lemma 1.2.16, there is a morphism of S-cubes of A-modules γ : Typ_A(f_S)^{⊕n} → Typ_A(f_S)^{⊕m} such that H^S_0(γ) = γ̄. Since βγ is an isomorphism by Lemma 1.2.12, by replacing γ with γ(βγ)^{−1}, we may assume that βγ = id_{Typ_A(f_S)^{⊕n}}. Therefore, there is a commutative diagram whose bottom line is exact. Here the dotted arrow δ is induced from the universality of Ker β. By applying the functor H^S_0 to the diagram above and by the five lemma, it turns out that H^S_0(δ) is an isomorphism of A/f_S-modules, and hence δ is also an isomorphism by Lemma 1.2.12. We complete the proof.
K-theory of Koszul cubes
In this section, we study the K-theory of Koszul cubes. Although we will avoid making the statements more general, several results in this section can easily be generalized to any fine localizing theory on the category of consistent relative exact categories in the sense of [Moc13b, §7]. We denote the connective K-theory by K(−) and the non-connective K-theory by 𝕂(−).
K-theory of simple Koszul cubes
In this subsection, let A be a noetherian commutative ring with 1 and f_S = {f_s}_{s∈S} an A-sequence indexed by a non-empty set S. Moreover, let X be a subset of S, W a subset of S ∖ X, W = U ⊔ V a disjoint decomposition of W, Y a subset of V, and let the letter p be a natural number with p ≥ #(U ⊔ X). Recall the definition of res_{W,F} from 1.1.9 and the notions M_{A,?}(f_U; f_V)(p) and | 2007-04-18T06:38:44.000Z | 2007-04-18T00:00:00.000 | {
"year": 2007,
"sha1": "76ccc33e9576fab2fea4519ad55c9c263426ae6f",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "eaa1f3aaf9a7c3517f01aefa428bd11497793bbe",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
270644428 | pes2o/s2orc | v3-fos-license | Variable species establishment in response to microhabitat indicates different likelihoods of climate-driven range shifts
Climate change is causing geographic range shifts globally, and understanding the factors that influence species’ range expansions is crucial for predicting future changes in biodiversity. A common, yet untested, assumption in forecasting approaches is that species will shift beyond current range edges into new habitats as they become macroclimatically suitable, even though microhabitat variability could have overriding effects on local population dynamics. We aim to better understand the role of microhabitat in range shifts through its impacts on establishment by i) examining microhabitat variability along large macroclimatic gradients, ii) testing which of these microhabitat variables explain plant recruitment and seedling survival, and iii) predicting microhabitat suitability beyond species range limits. We transplanted seeds of 25 common tree, shrub, forb, and graminoid species across and beyond their current elevational ranges in the Washington Cascade Range, USA, along a large elevational gradient spanning a broad range of macroclimates. Over five years, we recorded recruitment, survival, and microhabitat characteristics rarely measured in biogeographic studies. We asked whether microhabitat variables correlate with elevation, which variables drive species establishment, and whether microhabitat variables important for establishment are already suitable beyond leading range limits. We found that only 30% of microhabitat parameters covaried in the expected way with elevation. We further observed extremely low recruitment and moderate seedling survival in our study system, and these were generally only weakly explained by microhabitat. Moreover, species and life stages responded in contrasting ways to soil biota, soil moisture, temperature, and snow duration. Microhabitat suitability predictions suggest that distribution shifts are likely to be species-specific, as different species have different suitabilities, and availabilities, of microhabitat beyond their present ranges, thus calling into question large-scale macroclimatic projections that will miss such complexities. We encourage further research on species responses to microhabitat and the inclusion of microhabitat in range shift forecasts.
Introduction
The most pressing environmental issues of our times are understanding the ecological effects of ongoing climate change and predicting the ensuing implications for maintaining biodiversity (Lovejoy & Hannah, 2019). Foundational to these issues is an understanding of species range limits, which are the geographic limits to species' spatial distributions. Projection inaccuracy of how species ranges will shift with climate change is alarming (Urban, 2019), and it is unclear whether our inability to project species range shifts comes from our poor knowledge of which climatic variables are important for which species or from the many other factors that can influence range shifts (e.g. microhabitat variation, dispersal limitation, species interactions, and many others). Even explaining the mechanisms underlying observed shifts is difficult, and it is clear that responses are highly species-specific (Freeman et al., 2018; Rumpf et al., 2018).

To address this issue, ecologists are urgently seeking to understand species niche limits in order to project species distribution shifts with climate change. These are commonly predicted with correlative Species Distribution Models (SDMs; Elith et al., 2010; Pacifici et al., 2015), which correlate species occurrences to current macro-climate in order to predict the relative probability of occurrence in space and time (Wiens et al., 2009). Such models have become an integral part of biogeographical research, and progress has been made to address some of their shortcomings (Franklin, 2023). While these models implicitly assume that they include sufficient niche variables to accurately describe a species distribution, few models actually test what niche components are necessary to quantify population growth rate and shape species distributions (Pulliam, 2000).

A main driver of population growth rate is the fine-scale microhabitat that mediates the success of individual plants through abiotic (e.g., canopy cover, above-ground temperature, soil moisture) and biotic (e.g., soil biota, soil nutrients) conditions (Kephart & Paladino, 1997; Zurbriggen et al., 2013; Oldfather & Ackerly, 2019; Tanner et al., 2021; Sanczuk et al., 2023; Allsup et al., 2023; Kemppinen et al., 2023). However, studies that quantify population response to microhabitat variation along species ranges are rare, impeding an understanding of which aspects of fine-scale microhabitat are important in shaping species' broad-scale distributions (but see Tourville et al., 2022). Despite this, recent work has still shown that incorporating microhabitat into SDMs can provide substantially improved predictions (Maclean & Early, 2023).

How microhabitat and microclimate themselves vary along the large elevational, and therefore macroclimatic, gradients that characterize species ranges is poorly understood, and research is increasingly finding that microclimate is often decoupled from macroclimate (Scherrer & Körner, 2010; Ford et al., 2013; Lembrechts, 2023). It is, however, now widely accepted that microclimate differs from macroclimate due to factors such as forest canopy cover (De Frenne et al., 2019; Haesen et al., 2021) and topography (Lawson et al., 2014). This can modify species' habitat use (Lawson et al., 2014) and create climate change refugia (Dobrowski, 2011; Pradhan et al., 2023) that mediate species' responses to climate change (Morelli et al., 2020). Very few studies have been able to document how variable plant-relevant environmental drivers of performance are across macroscale gradients, although substantial progress is underway in including microclimate as part of biogeographical research (Kemppinen et al., 2023).

Testing how sensitive species' life stages are to variations in microhabitat can address many of the sources of bias that distribution models face, and accounting for demographic processes can greatly improve our understanding of range dynamics (Fig. 1; Normand et al., 2014; Copenhaver-Parry et al., 2020). For plant species, how early life stages (i.e. recruitment and seedling survival; henceforth establishment) respond to microhabitat may be the key to determining species' ability to expand their ranges (Kroiss & HilleRisLambers, 2015), as establishment is essential for a range expansion to occur. Understanding how early life stages respond to variations in microhabitat is thus critical to understanding microhabitat suitability for range shifts and population persistence, especially since life stages can respond differently to environmental conditions (Goodwin & Brown, 2023). Yet, few studies test how microhabitat influences establishment, and important microhabitat variables, such as soil moisture and temperature, are often left out of such work (Schurr et al., 2012).

To comprehensively understand how microhabitat, and other factors, influence species distributions, transplant experiments beyond species' ranges have been suggested as a promising approach (Lee-Yaw et al., 2016; Morris & Ehrlén, 2015). Transplant experiments show either congruence with SDM predictions (Sanczuk et al., 2022) or markedly different responses of populations than predicted by SDMs (Greiser et al., 2020), highlighting the importance of these experiments. Such experiments are particularly valuable at the leading edge of a species range (i.e. the edge expanding with climate change), where novel species interactions are likely to be found (Thuiller et al., 2008). Transplant experiments often find that species are establishment, not dispersal, limited, and this can be due to unfavorable microsite conditions (Clark et al., 2007; Davis & Gedalof, 2018). How microhabitat suitability is distributed beyond species' current leading edges therefore likely determines species' ability to shift their ranges (Tourville et al., 2022), yet few studies quantify microhabitat along and above species distributions. We utilized the large elevational macroclimatic gradients of the West and East sides of the Washington Cascade Range, USA, for a seed transplant experiment encompassing common grasses, forbs, shrubs, and trees to ask:
1. How does microhabitat variation within sites compare to among-site macroclimate variation, and does microhabitat covary expectedly along elevational gradients?
2. Does microhabitat explain establishment of common species in our system, and if so, are responses consistent among species?
3. If microhabitat variables explain establishment, does the distribution of microhabitats along macroclimatic gradients suggest range shifts will be facilitated, constrained, or unaffected by the availability of suitable microhabitat?
Study site
We utilized the large climatic gradients of the Cascade Range in Washington, USA, to transplant seeds along each of 2 large transects spanning ~1200 m on the West and East sides of the Cascade Crest, on the traditional lands of the Nlaka'pamux, Nooksack, Okanagan, and Methow peoples. Both transects are characterized by topographically complex terrain and differ substantially in their macroclimatic characteristics. The West transect (Mount Baker National Forest) is warmer and wetter than the East transect (Okanagan National Forest) (Supporting Information). Both transects are characterized by montane to subalpine species common in the Pacific Northwest, with an abundance of cedars, firs, heathers, and understory forbs and grasses. We selected 15 sites along each of our 2 transects (n = 30 sites) using satellite images of accessible areas, identifying areas (i.e. blocks) that had both low and high tree canopy openness. In the field, we established two blocks per site (n = 60 blocks) to encompass different levels of canopy openness, with one block in the relatively most open area and the other block in the relatively most closed canopy.
Species data
We sowed 25 species encompassing tree (Abies grandis, A. lasiocarpa, Picea sitchensis, P. engelmannii, Pinus ponderosa, P. contorta), shrub (Mahonia nervosa, M. aquifolium, Rubus ursinus, R. spectabilis, Sambucus cerulea, S. racemosa, Sorbus sitchensis, Vaccinium parvifolium, V. deliciosum), forb (Eriophyllum lanatum, Anemone occidentalis, Erigeron perigrinus, Lupinus latifolius, Maianthemum dilatum, M. racemosum, Tolmiea menziesii, Tellima grandiflora), and graminoid (Carex stipata, Carex spectabilis) growth forms. We chose congeneric and confamilial pairs of species that have different regional distributional characteristics (e.g. lower vs. higher elevation, or wetter vs. drier side of the Cascade Crest), thus aiming to capture a range of species' potential responses. To facilitate field identification of seedlings, we sowed these pairs onto separate quadrats within each block for three plots (n = 180 plots) of three 0.25 x 0.25 m quadrat replicates (n = 540 replicates), leaving the third quadrat unmanipulated to control for background recruitment. We also included unpaired species with large seeds (L. latifolius) or with high regional prevalence (S. sitchensis, A. occidentalis). We opportunistically sourced seeds from nearby areas in 2016 and purchased native seeds for those species for which we had no, or not enough, locally sourced seeds.

We homogenized all seed sources for a given species and sowed seeds in a sand mixture in September-October 2017. We recorded recruitment (i.e. survival to the end of the first growing season) and yearly survival of seedlings during the growing season (May-September) in 2018, 2019, and 2020, and recorded only surviving seedlings in 2022. We surveyed sites three times during the growing season in 2018 and 2019, and visited sites once at the end of the growing season in 2020 and 2022. We were not able to access 10 of our sites in 2018 (wildfire closures) or any of our field sites in 2021 (closed USA border).
Microhabitat data
We measured a suite of abiotic and biotic parameters to quantify microhabitat (Table S1) and categorized these parameters as being directly influenced by climate change or not. We measured some of these variables annually throughout the first three years of the study period and others in just the last year of the study (due to logistical and financial constraints). Here, we assume that the rank order of microhabitat differences among sites remained constant across years.

We measured aspects of microhabitat directly related to climate by recording soil and air temperature, duration of snow cover, and soil moisture at each block with two different data loggers (TMS-4 data logger, TOMST, Prague, Czech Republic; Wild et al., 2019; HOBO 64K Pendant Temperature/Alarm data loggers, Onset, Bourne, Massachusetts, USA). We calculated seasonal variables from these data loggers, using some functions from the package 'myClim' (Man et al., 2023; Man, Kalčík, Macek, Wild, et al., 2023), to generate the following biologically meaningful microclimate explanatory variables: summer maximum soil temperature, winter minimum soil temperature, spring days of snow coverage, summer minimum soil moisture, and summer minimum plant-height temperature (Table S1, Supporting Information).

We also measured aspects of microhabitat not directly related to climate by quantifying abiotic and biotic soil microhabitat, namely soil fungal, bacterial, organic carbon and nitrogen content, and water holding capacity at each replicate, to capture variability within each site. We also measured canopy openness at each block. Upon individual site snow melt-out in 2022, we took soil cores to conduct in situ measurements of the fungus:bacteria ratio (F:B) with a microBIOMETER® Test Kit (Prolific Earth Sciences, Montgomery, New York, USA). We then quantified the soil organic carbon to nitrogen ratio (C:N) and water holding capacity in the lab (Supporting Information).
Statistical analyses: microhabitat variation
To answer how variation within sites compares to among-site variation for each of our microhabitat parameters, we calculated the variance partition coefficient from intercept-only linear mixed models (LMMs; package 'lmerTest'; Kuznetsova et al., 2017) that included block nested within site as a random effect (microhabitat variable ~ 1 + (1|site/block)). In cases where LMMs did not converge, we changed the random effect to include only site. To test if microhabitat parameters covary with elevation, we fit LMMs with quadratic elevation, transect, and their interaction, and included site as a random effect (microhabitat variable ~ poly(elevation, 2) * transect + (1|site)). We also used a principal component analysis plot to identify any clustering in the microhabitat data. We conducted all data processing and analyses in R version 4.2.3 (R Core Team, 2023).
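For readers less familiar with variance partitioning, the quantity itself is simple once the variance components are estimated. The following sketch (in Python, with made-up component values) shows the computation, independent of the R fitting step described above.

```python
import numpy as np

def vpc_site(var_site, var_block, var_resid):
    """Share of total variance attributable to among-site differences."""
    return var_site / (var_site + var_block + var_resid)

# Hypothetical variance components from an intercept-only LMM:
# 2.0 (site), 1.0 (block within site), 1.0 (residual).
assert np.isclose(vpc_site(2.0, 1.0, 1.0), 0.5)
```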
Statistical analyses: species establishment
To interpret our results in terms of likelihood of recruitment, recruit numbers, and seedling survival, we first transformed our species-level yearly recruitment and seedling survival data to binary or proportional responses by calculating: binary recruitment, relative recruit counts in the first three years (2018, 2019, 2020), and 1-year (2018-2019 or 2019-2020), 2-year (2018-2020), and multi-year (survived to 2022) survival of seedlings.

We controlled for background recruitment by subtracting the recruit counts in control plots from paired seed addition plots. From this, we then calculated relative recruit counts as (recruit count of species i in site-replicate j) / (maximum recruit count of species i). We calculated the proportion of surviving seedlings in all replicates, including control replicates, using all non-zero recruitment data and not allowing survival probability > 1.

While we have general a priori hypotheses of how certain microhabitat conditions affect species establishment (Table S1), previous studies have shown that recruitment and seedling survival vary greatly by species, and we have limited prior knowledge of which microhabitat parameters are important for each species. We thus chose a data-exploratory framework and used an information-theoretic approach with model averaging ('MuMIn' package; Bartoń, 2022) to compare ecologically meaningful microhabitat variables and identify the most suitable ones for each species (Tredennick et al., 2021). We fit separate binomial generalized linear models (GLMs) for each species' life stage to determine which of the microhabitat variables described above (uncorrelated, with Pearson's correlation coefficient < 0.7) are important for establishment (i.e. recruitment and seedling survival). We only analyzed species for which we observed a response in at least 8 plots for any given life stage.

Each of our global GLMs included all 8 microhabitat variables described above, plus any quadratic effects identified as important by AICc in a reduced model ('AICcmodavg' package; Mazerolle, 2020; Supporting Information). To account for soil temperature effects on recruitment and plant-height temperature effects on seedling survival, we fit recruitment models with soil temperature and seedling survival models with plant-height temperature. To account for microhabitat variation not captured by any of these parameters, we further included canopy openness and transect as fixed effects (life stage success ~ microhabitat variables + canopy openness + transect, family = binomial(link = 'logit')). In 1-year survival models, we also included year as a fixed effect to account for differences among years in overall seedling survival and different frequencies of site visits (life stage success ~ microhabitat variables + canopy openness + transect + year, family = binomial(link = 'logit')).

We removed variables with a variance inflation factor > 5 ('car' package; Fox et al., 2023) to reduce parameter collinearity, and restricted the models used in model selection to have at most N/10 parameters to avoid fitting overly complex models. Where there were multiple GLMs within 2 AICc points of the best model, we calculated the full model-averaged coefficients ('MuMIn' package; Bartoń, 2022); otherwise, we selected the model with the lowest AICc as the best model. Because of the link function in our GLMs, our results can be interpreted as increasing or decreasing the log odds of recruitment or seedling survival, corresponding to higher or lower likelihoods, respectively.
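The model-averaging step itself reduces to Akaike weights. The following sketch, with invented AICc values and coefficients, illustrates the arithmetic behind 'full' model-averaged coefficients (terms absent from a model enter with a coefficient of zero).

```python
import numpy as np

def akaike_weights(aiccs):
    """Akaike weights: relative support for each candidate model."""
    delta = aiccs - aiccs.min()
    w = np.exp(-delta / 2.0)
    return w / w.sum()

aiccs = np.array([100.0, 101.5, 104.0])    # hypothetical candidate models
coefs = np.array([0.8, 0.5, 0.0])          # one coefficient per model (0 = absent)
keep = aiccs - aiccs.min() <= 2.0          # candidate set within 2 AICc points
w = akaike_weights(aiccs[keep])
avg_coef = float(np.sum(w * coefs[keep]))  # full model-averaged coefficient
```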
Statistical analyses: microhabitat suitability
To answer how microhabitat suitability changes at and beyond species' range edges, we used the model results from the species establishment analyses above to predict the likelihood of recruitment, relative recruit counts, and 1-year as well as 2-year seedling survival at each plot (predict(model, type = 'response'); 'MuMIn' package; Bartoń, 2022). We considered these predicted values as proxies for microhabitat suitability, with increasing suitability at and beyond the range edge indicating a greater likelihood of range expansion and decreasing suitability indicating a lower likelihood of range expansion. For these analyses, we used only the species whose sites extend beyond their thermal range limit (Fig. S2) and with an observed response in at least 8 plots.
Microhabitat variability
Microhabitat variation within sites was higher than among-site variation for half of the microhabitat parameters for which we could fit nested LMMs. Out of the 9 microhabitat parameters that we measured, only 3 varied predictably with elevation (canopy openness, winter minimum soil temperature, and spring days of snow). The parameters that followed an elevational pattern include ones expected to be both directly (winter minimum temperature, spring days of snow) and indirectly (canopy openness) affected by climate (Fig. 2).

Our principal component analysis plot shows that the microhabitat variables we measured cluster broadly by soil characteristics (carbon:nitrogen, fungus:bacteria, water holding capacity), soil moisture (summer minimum soil moisture, spring days of snow), light availability (canopy openness), soil temperature (summer maximum soil temperature), and above-ground temperature (summer minimum plant-height temperature) (Fig. S3). An additional parameter describing soil temperature (winter minimum soil temperature) clusters in the middle of the microhabitat space.
Species establishment
Out of the 25 species that we sowed, 84% (21/25) recruited in at least 1 plot and 56% (14/25) recruited in 8 plots or more (Table S2). Of the species that recruited, seedlings (i.e. at least one seedling) survived for one year for 57% (8/14) of species and survived for two years for 43% (6/14) of species. Almost all species that had sites beyond their range edge recruited and survived beyond their range edge, a pattern seen for species with either high- or low-elevation range edges (Table S2). Almost all of our models met the GLM assumption of independence of residuals. However, many of our models did not meet homogeneity of variance, variance did not equal the mean, or the link function was sometimes inappropriate (> 0.5 difference from 1 for the slope of the link function). Together with the low proportion of variance explained (Table S3) from each set of candidate models, we are therefore cautious in interpreting our results.

Overall, we found microhabitat parameters both related and not related to climate identified as the most important parameters in model selection of the effects of microhabitat on establishment, but models had low explanatory power (Tables 1, S3). At the community level, the main patterns we found were that certain microhabitat variables had largely negative or positive effects, but the direction of these effects changed with different life stages (Fig. 3). At the species level, we found that some species had only a few microhabitat parameters chosen in model selection, whereas others had many (Fig. S4). Within the same species, parameters usually had the same directionality of effect on likelihood of recruitment, recruit counts, and seedling survival, except for T. grandiflora. For two species, A. lasiocarpa and E. lanatum, the directionality of effects was consistent across life stages.
Microhabitat suitability
We generated predictions of microhabitat suitability to better understand whether the distribution of microhabitats along climatic gradients suggests range shifts will likely be facilitated (i.e. where suitability increases with elevation), constrained (i.e. decreases), or unaffected (i.e. is constant). We note that, due to poor model fit in the original models, our predictions cannot be used to make definitive forecasts of which species are likely to shift their ranges with climate change; rather, we use them to assess how microhabitat suitability changes across a large elevational gradient. Our predictions indicate that only range shifts for L. latifolius are likely to be facilitated, with microhabitat suitability increasing with elevation (Fig. 4), although our experimental sites extended only just beyond the range limit for this species. For all other species, microhabitat suitability either declines or shows no pattern with elevation, and thus range shifts are likely to be either constrained or unaffected, respectively (Figs. S5, S6, S7). We also found that suitability patterns were modified by transect and life stage.
Together with these suitability predictions, the microhabitat parameters that significantly vary with elevation can give further insight into which aspects of microhabitat can facilitate or constrain range shifts. For example, the increasing microhabitat suitability for L. latifolius (Fig. 4a), together with the positive effects of canopy openness on the species (Table 1) and canopy openness increasing with elevation (Fig. 2a), points to a likely facilitated range expansion for the species. Spring snow cover shows the same pattern, and thus is a further microhabitat variable that will likely facilitate range expansion for this species. However, winter minimum soil temperature decreases with elevation in the Okanagan transect but positively affects L. latifolius recruit counts and seedling survival, so this microhabitat variable could constrain range expansion.
Discussion
While many species' ranges are on average shifting in the direction predicted by SDMs, there is large variation among species that is difficult to explain. Microhabitat variability is increasingly recognized as influencing population dynamics across species ranges (Oldfather & Ackerly, 2019) and may be a key factor in understanding and predicting range shifts (Lembrechts, Nijs, et al., 2019; Maclean & Early, 2023; Stickley & Fraterrigo, 2023). However, how microhabitat suitability, and variability therein, is distributed along species ranges is vastly understudied, impeding an understanding of how microhabitat can facilitate or constrain range shifts. We found that microhabitat is highly variable across species ranges and is largely decoupled from the macro-scale at which SDMs are typically constructed. We further found variable, albeit weak, effects of microhabitat for different species and life stages, suggesting that the drivers of establishment are complex and difficult to detect. Our microhabitat suitability predictions show microhabitat will either constrain range expansions, or have no effect, for most species, with the range shifts of just one species (L. latifolius) likely facilitated by microhabitat.
Complex microhabitat patterns across elevation gradients
Surprisingly, most of our microhabitat variables, even ones related to climate (summer soil moisture, summer soil temperature, summer plant-height temperature), did not follow an elevational pattern representative of the macroclimate (Fig. S8) commonly used in SDMs. This is consistent with many studies finding that microhabitat is often decoupled from elevation and regional climate (Ford et al., 2013; Lembrechts, Lenoir, et al., 2019). We further found that none of the soil composition parameters we measured (fungus:bacteria, carbon:nitrogen, water holding capacity) followed an elevational pattern, even though other alpine studies have found soil microbial community (Hiiesalu et al., 2023) and soil carbon:nitrogen (Weintraub et al., 2016) to covary with elevation.
We further found that only canopy openness, winter minimum soil temperature, and spring snow days covary with elevation. Despite higher-elevation areas being characterized by more open forest canopies, the high variability at each site (Fig. 2) is in line with the large body of work showing that forest canopies can provide climate refugia (De Frenne et al., 2019; Haesen et al., 2021). As species' light requirements can be variable, light availability and canopy cover can shape species ranges (Muñoz Mazon et al., 2023) and therefore can be an important component of range shifts (Tourville et al., 2022).
Low and variable effects of microhabitat on establishment
Our results agree with other studies, which find overall low recruitment in seed addition sites (Clark et al., 2007) and decreasing fitness of transplants beyond the range (Lee-Yaw et al., 2016; Stanton-Geddes et al., 2012). Plant establishment beyond current range edges is necessary for plant species to shift their ranges upward; however, this is complicated by plants usually being recruitment limited. This means that they are limited by some aspect of their environment (e.g., microhabitat unsuitability, seed predation) and not by dispersal (Clark et al., 2007). While we did not control for seed predation, herbivory, or fungal infections, we captured mortality with our methods. Even though seed predation rates decline in higher elevation areas (Hargreaves et al., 2019), we still removed fleshy fruits around seeds to deter predation. We also did not collect data to test for effects of community composition on establishment, but with such low recruitment rates, plants in our system would still be considered recruitment limited.
We did not find that establishment responses to any particular microhabitat aspect were restricted to certain growth forms; rather, relationships to microhabitat were species-specific. This is not surprising, as species have unique environmental requirements (Table S1) and species characteristics even impact predictive power in SDMs (Guisan et al., 2007). We also did not find differences between the effects of microhabitat aspects expected to be directly versus indirectly affected by climate change. Interestingly, we found that some parameters, such as winter soil temperature, increased the likelihood of recruitment but switched to decreasing the likelihood of seedling survival. One exception was spring days with snow, which generally had positive effects; this is similar to findings by Davis & Gedalof (2018), who found positive effects of winter snow cover on recruitment. The importance of the microbial community in plant dynamics is also increasingly acknowledged (Castro et al., 2022), although soil fungus:bacteria ratio was not selected more than other variables in our system. Microbial communities could even be an important factor in determining species range shifts, for example favorable soil microbial composition mediating climate tolerance to promote tree seedling survival (Allsup et al., 2023).
Predicting microhabitat suitability
We found that microhabitat suitability beyond the leading range edge was species- and life stage-specific, with most species showing either decreased suitability or no pattern with elevation. We posit that the role that microhabitat may play in facilitating or constraining range shifts for any given species is closely tied to how microhabitat itself varies with elevation. For example, if a favorable microhabitat parameter for establishment decreases with elevation, establishment will overall not be favored and a range shift could be constrained. We found this in our system, with the likelihood of L. latifolius recruit counts and seedling survival increasing with warmer winter minimum soil temperatures, while soil temperature decreases with elevation. If a favorable microhabitat parameter increases with elevation, however, this could lead to a facilitated range shift. This is the case for canopy openness, which increases with elevation and also increases the likelihood of L. latifolius establishment. Since microhabitat suitability increases beyond the range for this species, these cases highlight the complex ways in which microhabitat parameters may interact to mediate range shifts. The alternative scenarios, where an unfavorable microhabitat parameter either declines with elevation to inversely favor establishment or increases with elevation to inhibit establishment, can likewise facilitate or inhibit range shifts, respectively.
Macroclimatic parameters vary predictably with elevation (Fig. S8) in our study system; however, only a third of our microhabitat variables show an elevational pattern. Since species distributions can be driven by microhabitat, understanding how microhabitat parameters vary across species' ranges can give insights into how microhabitat may facilitate or inhibit range expansion (Lembrechts et al., 2017; Tourville et al., 2022; Kemppinen et al., 2023). Incorporating microhabitat suitability into ecological studies is not new, including quantifying habitat suitability in the field (Jabis & Ayers, 2014), with remote sensing (Falco et al., 2019), and at the leading range edge (Mamet & Kershaw, 2013). Transplant studies often find decreasing macroclimate suitability beyond the range (Lee-Yaw et al., 2016), but to our knowledge no studies exist that assess microhabitat suitability beyond species range edges. Despite increasing work showing the necessity of including microhabitats in SDMs (Lembrechts, Nijs, et al., 2019), the vast majority of SDMs still focus on macroclimatic variables, and this may lead to mismatches between predicted and observed range shifts, or the lack thereof.
In our system, we found that suitable microhabitat is unchanged or reduced beyond the leading edge for many species, yet we also find that macroclimatic conditions enable an upwards shift in species' recruitment optima at the community level (unpublished data). However, this pattern is only evident at the community level, with high species-level variability and no effect of canopy cover. This matches the large variability, and low explanatory power, seen in our microhabitat suitability predictions, and suggests that species can find pockets of suitable microhabitat to recruit beyond their macroclimatic cold range edge. Microhabitat suitability may be just high enough to allow for successful establishment, but is overall very low, and the complicated ways in which microhabitat acts on species establishment make it difficult to detect patterns. We also found continued species recruitment at the warm edge of the range (unpublished data), and this buffering of contractions at the warm edge could be due to microhabitat refugia (De Lombaerde et al., 2022) created by the range of microhabitats found at the lower elevation sites (Fig. 2).
Limitations
Our experimental design is not without limitations. Seed provenance might cause different recruitment responses across microhabitat gradients; however, variation in germination rates was not biologically meaningful (Fig. S1). Since we sowed all seeds in the same year, we also cannot test whether we sowed in favorable versus unfavorable years; we aimed to capture a large germination window by recording recruitment over three years. In addition, we collected some of our data on species responses and microhabitat measurements asynchronously and therefore emphasize that only differences between sites, not absolute values, should be interpreted for their effects. Finally, we strongly urge future studies to incorporate increased replication at each microhabitat value for higher statistical power in explaining results.
Conclusion
To our knowledge, our work is one of the first studies to measure key demographic life stages of a large group of species along a microhabitat gradient both within and beyond their current range limit. As such, this work yields a more comprehensive understanding of the mechanisms that set species range limits and indicates ways in which species shift their distributions with climate change. Our results suggest that the ways in which microhabitat parameters influence early life stages are complex and difficult to detect, which complicates range shift predictions. A greater understanding of the role of microhabitat in shaping species' distribution limits will ultimately improve predictions of how species distributions will shift with climate change. We emphasize that predictions accounting for complex microhabitat drivers are necessary to create tailored conservation and management decisions in order to mitigate ongoing biodiversity loss.
Tables
Table 1. Model averaging results (or best model, when no other models were within ΔAIC < 2, indicated with '*') for assessing the importance of different microhabitat variables in explaining likelihood of recruitment (i.e. did seeds germinate or not), relative recruit counts (i.e. how many seedlings recruited relative to the maximum recruit count for the species), and 1- or 2-year seedling survival. '+' indicates a positive parameter estimate and '−' indicates a negative parameter estimate. Columns are colored by the expectation of those microhabitat variables being directly (blue) or indirectly (green) affected by climate change. Blank cells indicate that the parameter was not chosen in model selection. Because many of our soil loggers were compromised by animal disturbance, we only used those variables for species that showed no response at
Figure 2. Both microhabitat parameters that are not expected to be directly affected by climate change (a-d) and those expected to directly respond to climate change (e-i) differed in their within-site variability and show variable relationships with elevation. We show within-site (σ²within) and among-site (σ²among) percent variation explained. These variance partitioning results are from Linear Mixed Models (LMMs) of the microhabitat parameter on the y-axis with block nested within site (microhabitat parameter ~ 1 + (1|site/block)). In cases of model fitting issues, we ran the LMM with only (1|site) as a random effect and thus only report among-site percent variation explained. We also fit LMMs to test the effects of quadratic elevation, transect, and their interaction on the microhabitat parameter indicated on the y-axis, with a random effect of site (microhabitat parameter ~ poly(elevation, 2) * transect + (1|site)). We show one fitted line for models with significant (alpha = 0.05) effects of elevation (line is shown for the MB transect only), and two fitted lines for a significant elevation*transect interaction even if elevation itself was not significant. Note that fitted lines do not include random effects. Legend for all plots is as in (a), with different colors indicating the warmer, wetter West (MB) transect or the cooler, drier East (RP) transect.
Figure 3. Microhabitat parameters identified as most important vary for likelihood of recruitment (a), relative recruit counts (b), 1-year seedling survival (c), and 2-year seedling survival (d). Percentages are calculated from the total times a parameter was used in model selection for all species, with pink indicating a negative effect and green indicating a positive effect. Legend in all panels is as in (b). Note that each panel corresponds to a different number of models (see Tables 1, S3). Summer temperature and minimum soil moisture were only used in model selection for two species because of missing data at many sites. Parameter abbreviations are: C:N - soil carbon:nitrogen; Canopy - canopy openness; F:B - soil fungus:bacteria; Region - transect; S Soil Moist - summer minimum soil moisture; S Temp - summer maximum soil temperature for (a), (b) and summer minimum plant-height temperature for (c), (d); Snow - spring days of snow cover; W Soil Temp - winter minimum soil temperature; WHC - water holding capacity, with '^2' indicating a quadratic effect.
Figure 4. Microhabitat suitability predicted for each plot across the elevation gradient of our study system for three representative species with sites beyond their current thermal range limit (below range limit = solid points; beyond = hollow points; see Fig. S2) varies by species and life stage. Predictions are generated from model averaging results testing the effects of microhabitat on likelihood of recruitment (a-c), relative recruit counts (d-f), and 1-year seedling survival (g-i). Note that these predictions are not based on elevation, and that y-axis scales differ. Points are colored by the warmer, wetter West (green) or cooler, drier East (black) transects, and the size of points corresponds to observed recruitment (a-c), relative recruit counts (d-f), or the proportion of seedlings that survived to one year (g-i) in a plot. Lighter colors for both transects indicate plots without recruitment and thus this size does not reflect an observed value. Full predictions are shown in Fig. S7. | 2024-06-22T15:48:27.608Z | 2024-06-19T00:00:00.000 | {
"year": 2024,
"sha1": "8c6238e9fcd5b25a6fe66bcaf302f153076cd3b5",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/ecog.07144",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "b7fd23dcc45a18c54b8c3ae0535191aa995375d4",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
213559800 | pes2o/s2orc | v3-fos-license | Study on Distribution Law of Formation Water in S209 Well Area
The Shaan 209 area in the Shan 2 Member of the Yulin gas field is a water-rich area. The water production of most wells is too large, which seriously affects the production of gas wells in the well area. Building on prior work on the chemical characteristics and distribution of formation water in the study area, and based on laboratory analyses of existing formation water combined with geochemical classification, we analysed the type, origin and distribution of formation water in the Shaan 209 well area in the southern Yulin gas field and identified the main controlling factors of the formation water in the work area. On this basis, it is demonstrated that the water-rich area of the Shan 2 Member still has development potential, which provides a basis for optimizing management strategies in the water-rich area and lays a foundation for the scientific and efficient development of the southern Yulin area.
Introduction
The Upper Paleozoic of the Yulin gas field is dominated by marine-continental transitional facies and inland lake basin deposits. From bottom to top, the Carboniferous Benxi Formation, the Taiyuan Formation, and the Permian Shanxi, Xiashihezi, Shangshihezi and Shiqianfeng Formations were developed. The Shan 2 Member in the southern area of Yulin is a braided river delta sedimentary system consisting of two parts: the braided river delta plain and the braided river delta front subfacies [1,2]. Up to now, the proven gas-bearing area of the Shan 2 Member in the self-operated area of the Yulin gas field is 773.59 km², with proven reserves of 719.21×10⁸ m³; the proven gas-bearing area of Mawu 1+2 is 318.60 km², with proven reserves of 138.18×10⁸ m³. The total geological reserves put into use in the southern area of the Yulin gas field are 857.39×10⁸ m³, and a production capacity of 20×10⁸ m³ per year was completed. Since the completion of this 2 billion m³/year capacity in 2005, the southern Yulin district has sustained 7 years of production without any new capacity being added [3-5].
In order to prevent the advance of the water body in the water-rich area of the Shan 2 Member, no perforation or reservoir stimulation was carried out on the Shan 2 reservoir in the Shaan 209 well area in southern Yulin, resulting in insufficient use of geological reserves. With the shrinking of the enrichment area, developing this kind of low-grade reservoir is the trend for the gas field to offset declining production. However, gas reservoirs in water-rich areas have problems such as large investment, low output, and difficult management [6-8], and have always been a difficult point in the development of gas fields at home and abroad. Whether development of the Shan 2 reservoir in the Shaan 209 well area in southern Yulin has value, whether the water-avoidance development concept of "separation mining, water control and gas production" is still applicable, and how to optimize the management of gas wells in the water-rich area all need to be re-evaluated.
Chemical characteristics and distribution of formation water
Statistics of water samples from the Shaan 209 well area in the southern Yulin gas field indicate that the salinity of the edge (bottom) water samples in the research area is between 390 and 252,740 mg/L. Using salinity classes bounded at 1,000, 10,000, and 100,000 mg/L, we drew the plane distribution map of total salinity for the Shaan 209 well area, as shown in Fig. 1 [9]. It can be seen from the figure that the distribution of total salinity in the study area is characterized as low in the north and high in the south. Combined with the plane distribution map of chloride ion in the Shaan 209 well area in Fig. 2, it can be concluded that the produced water in the northeastern part of the gas reservoir is condensed water, the middle part is mixed water, and the lower part is partly sealed formation water. It can be seen from the water-type distribution map of the Shaan 209 well area (Fig. 3) that the formation water in the pores of the reservoir section of the Shaan 209 well area in southern Yulin is mainly calcium chloride type water, indicating that the formation water is in a reducing environment and reflecting the characteristics of deep stagnation [10].
From the pH plane distribution map (Fig. 4), the pH value of the formation water in the Shaan 209 well area in the southern Yulin area is mostly between 5.3 and 7.2, indicating weak acidity to neutrality. Generally speaking, high-salinity metamorphic water in a long-term closed, pressurized environment in a deep basin is not acidic, being mainly alkaline or weakly alkaline water; even water in dissolution equilibrium at the surface shows alkaline characteristics, with pH values between 7.0 and 8.68 [11]. The reason for this phenomenon may be that the dissolution of the formation water has not reached equilibrium and the residual organic acid content is high, giving a low pH value; it may also be related to the low pH value of coal seam water.
Formation water chemical characteristic parameters
For the reservoirs in the Shaan 209 well area, the formation sealing is good, and the Na⁺/Cl⁻ values of the edge (bottom) water are concentrated between 0.1 and 0.35 (Fig. 5), which is consistent with the Boyarsky theory. The Na⁺/Cl⁻ value of condensate water is unstable, mainly distributed between 0 and 1.5, indicating that it is strongly affected by other types of water. Residual fracturing fluid and residual formation water in the gas layer (including residual water in tight layers) have Na⁺/Cl⁻ values distributed between 0 and 0.6. The residual formation water, condensate water and residual fracturing fluid in the gas layer show no particular distinction in Na⁺/Cl⁻ values, but overall, the Na⁺/Cl⁻ values of these three types of water are higher than those of the edge (bottom) water. Fig. 6 shows the plane distribution of the (Cl⁻ − Na⁺)/Mg²⁺ ratio of the edge (bottom) water in the Shaan 209 well area. For the edge (bottom) water of the Shaan 209 well area, the (Cl⁻ − Na⁺)/Mg²⁺ ratio is 31.14-98.67, while that of condensate water is between -107.77 and 83.12. The (Cl⁻ − Na⁺)/Mg²⁺ ratio of the residual formation water in the gas layer is between 3.72 and 78.11, and that of the residual fracturing fluid is between 5.50 and 80.22. In general, the (Cl⁻ − Na⁺)/Mg²⁺ ratio of the edge (bottom) water is larger, and there is a tendency for the ratio to gradually decrease from edge (bottom) water through gas-layer residual water to condensate water. It can be seen from the Stiff diagrams (Fig. 7 and Fig. 8) that the Stiff-diagram features from south (west) to north (east) are very similar. The edge (bottom) water is characterized by a large amount of Cl⁻ among the anions and small contents of SO₄²⁻ and HCO₃⁻, forming the anion side of the Stiff diagram [12].
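As a simple illustration, the hydrochemical coefficients discussed above can be computed as follows; the column names and example values are hypothetical, and concentrations are assumed to be in meq/L, as is conventional for these genetic coefficients.

```r
# Sketch: Na+/Cl- and (Cl- − Na+)/Mg2+ coefficients for a table of analyses.
w <- data.frame(na = c(1200, 85), cl = c(6400, 120), mg = c(55, 4))  # meq/L

w$na_cl    <- w$na / w$cl            # Na+/Cl-: low values suggest sealed brine
w$cl_na_mg <- (w$cl - w$na) / w$mg   # metamorphism coefficient
```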
Distribution law of formation water
According to the existing formation water analyses, combined with geochemical classification, the type, origin and distribution law of formation water in the Shaan 209 well area in the southern Yulin district were analysed, and the main controlling factors of formation water in the study area were clarified. Based on the results of logging interpretation, three main connected-well profiles were plotted, reflecting the main distribution of gas and water in the Shaan 209 well area, as shown in Fig. 9. It can be seen from the gas-water distribution profiles that the water produced in the Shaan 209 well area is mainly distributed near wells such as Yu50-3, Yu49-3B, Yu48-4 and Yu52-5. From the plane distribution map (Fig. 10), the water production in the Shaan 209 well area is mainly distributed in the southwest.
Figure 10. Distribution plan of production wells in the Shaan 209 well area.
We also examined the water distribution characteristics of the main gas-producing sublayers Shan 2 3-1 and Shan 2 3-2. Fig. 11 shows the plane water distribution of the Shan 2 3-1 sublayer in the Shaan 209 well area. The water-rich area in the Shan 2 3-1 sublayer covers 78.2 km². The average effective thickness is 5.7 m, the average porosity is 5.5%, and the average water saturation is 51%. Taking an irreducible water saturation of 20%, the water volume of the Shan 2 3-1 sublayer is 760×10⁴ m³; considering a comprehensive compressibility of 0.001, the movable water volume is 19.0×10⁴ m³.
The water-rich area of the Shan 2 3-2 sublayer covers 84.1 km²; the average effective thickness is 5.4 m, the average porosity is 5.7%, and the average water saturation is 52.2%. Taking an irreducible water saturation of 20%, the water volume of the Shan 2 3-2 sublayer is 833.5×10⁴ m³; considering a comprehensive compressibility of 0.001, the movable water volume is 20.8×10⁴ m³.
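The quoted static water volumes appear consistent with the standard volumetric relation, water volume = area × effective thickness × porosity × (Sw − Swi), as the short check below shows; the movable-water step additionally involves the compressibility and an unstated pressure drawdown, so only the static volumes are reproduced here.

```r
# Volumetric check of the quoted pore-water volumes (units: m^3).
water_volume <- function(area_km2, h_m, phi, sw, swi = 0.20) {
  area_km2 * 1e6 * h_m * phi * (sw - swi)
}
water_volume(78.2, 5.7, 0.055, 0.510)  # ~7.60e6 m^3 (Shan 2 3-1: 760×10^4)
water_volume(84.1, 5.4, 0.057, 0.522)  # ~8.34e6 m^3 (Shan 2 3-2: 833.5×10^4)
```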
Conclusion
The distribution of total salinity in the Shaan 209 well area shows the characteristics of being low in the north and high in the south. Combined with the analysis of the plane distribution of chloride ion, the produced water in the northeastern part of the gas reservoir is condensed water, the middle part is mixed water, and the lower part is partly sealed formation water. The cations in the formation water are mainly Na⁺ and Ca²⁺, with relatively little Mg²⁺, and the anion is mainly Cl⁻; the formation water is mainly calcium chloride type water and is in a reducing environment, which reflects the characteristics of deep stagnation. The water produced in the Shaan 209 well area is mainly distributed near wells such as Yu50-3, Yu49-3B, Yu48-4 and Yu52-5, and its location is mainly in the southwest. Combining the reservoir physical properties of each well area, the movable water bodies of the Shan 2 3-1 and Shan 2 3-2 sublayers were calculated. The water body of the Shan 2 3-1 sublayer is 760×10⁴ m³, with a movable water volume of 19.0×10⁴ m³; the water body of the Shan 2 3-2 sublayer is 833.5×10⁴ m³, with a movable water volume of 20.8×10⁴ m³. The water body of the Shan 2 3 layer, the main producing layer of the Shaan 209 well area, is thus 1593.5×10⁴ m³, while the movable water body is only 39.8×10⁴ m³; the movable water body is small. | 2019-12-05T09:37:30.636Z | 2019-11-29T00:00:00.000 | {
"year": 2019,
"sha1": "c49803156ebba37b9bf3b1956ec88637775352ed",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/384/1/012064",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "91bcae1c063df5abdb9a286436718d78315b3f93",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": [
"Physics",
"Environmental Science"
]
} |
52834299 | pes2o/s2orc | v3-fos-license | Evaluating the performance of gene-based tests of genetic association when testing for association between methylation and change in triglyceride levels at GAW20
Although methylation data continues to rise in popularity, much is still unknown about how to best analyze methylation data in genome-wide analysis contexts. Given continuing interest in gene-based tests for next-generation sequencing data, we evaluated the performance of novel gene-based test statistics on simulated data from GAW20. Our analysis suggests that most of the gene-based tests are detecting real signals and maintaining the Type I error rate. The minimum p value and threshold-based tests performed well compared to single-marker tests in many cases, especially when the number of variants was relatively large with few true causal variants in the set.
Background
Methylation data continues to grow in popularity owing to both its increasing availability (decline in cost) and biological relevance, a result of increasing hypotheses about the contribution of epigenetic effects to the genetic architecture of common human diseases. This rapid rise in popularity has meant that there are few "best practices" for the analysis of genome-wide epigenetic data. However, many of the current analytic approaches for methylation data are informed by the more mature field of genome-wide association studies (GWAS).
For many years, the use of multimarker tests of genetic association has been a popular alternative to single-marker tests in GWAS. Multimarker tests have the potential ability to aggregate weaker individual signals across a biologically related set of markers, reduce the substantial multiple testing penalties required for GWAS, and directly connect statistical testing with functional biological units (eg, genes or other meaningful sets). The rise in the popularity of next-generation sequencing data and the subsequent ability to easily and inexpensively measure rare genetic variants has made multimarker tests a necessity by requiring the aggregation of signals from rare variants in order to improve statistical power to a reasonable level.
Prior work by our group [1,2], and many others [3,4], evaluated numerous strategies for summarizing marker-level genetic association statistics across biologically informed sets. For example, burden tests are well known to lose power when testing sets of markers containing both risk-increasing and risk-decreasing variants, whereas variance components tests are robust to these situations and mixtures of both methods can sometimes yield "optimal" power [2,3]. We have identified a test statistic that is particularly robust to situations where the majority of markers in the set are noncausal, which can be near optimal when combined with a variance components test [1].
In this paper, we evaluate the application of novel gene-based tests of association when analyzing simulated genome-wide methylation data as compared to single-marker tests. We choose test statistics and evaluate their behavior in light of recent methodological results on gene-based tests for rare genetic variants (see previous paragraph). We evaluate the performance of novel gene-based tests across different simulated data sets provided as part of GAW20, and as compared to direct application of "standard" single-marker testing approaches.
Sample population and variables
We analyzed the simulated data set provided as part of GAW20 and were aware of the "answers" (simulation parameters) when conducting this analysis.
Models
We used a 2-stage modeling process. The first stage resulted in 200 models (one for each of the 200 simulations provided). The second stage resulted in 654,755 models (one for each single-nucleotide polymorphism [SNP] that passed standard GWAS quality control [QC] criteria: Hardy-Weinberg equilibrium p value > 1 × 10⁻⁶, minor allele frequency > 1%, SNP missing data rate < 5%).
The lmekin function from the coxme package in R [5] was used to predict the change in log-transformed TG levels (y = ln(followup) − ln(baseline)). In cases where two separate TG measurements were available for either follow-up or baseline, we natural-log (ln)-transformed the data before averaging. Change in ln-transformed TG levels was predicted by the 7 covariates listed earlier, baseline ln-transformed TG levels, and the familial relationships in the model (which were accounted for through the use of the kinship matrix). For each of the 200 simulations, we then saved the resulting "residual" value (r_i = ŷ_i − y_i) for each of the i = 1,…,670 individuals in our analysis.
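A sketch of this first stage is given below; the covariate and column names are hypothetical stand-ins for the 7 covariates referenced above, kmat is a kinship matrix (e.g. from the kinship2 package), and the residual accessor is an assumption about the lmekin fit object rather than a documented guarantee.

```r
# Stage 1 sketch: kinship-adjusted LMM for change in ln-transformed TG.
library(coxme)

dat$y <- log(dat$tg_followup) - log(dat$tg_baseline)

fit1 <- lmekin(y ~ age + sex + smoke + log(tg_baseline) + (1 | id),
               data = dat, varlist = coxmeMlist(kmat))

# The text defines r_i = yhat_i - y_i, i.e. minus the usual residual.
r <- -fit1$residuals   # assumed component of the lmekin fit object
```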
The second stage predicted the residuals (r_i's) from stage 1 based on the number of minor alleles (SNP_j = 0, 1, 2) and methylation scores (CPG_j ∈ [0, 1]), along with an interaction term between SNP_j and CPG_j, with a separate model for each SNP_j, CPG_j pair. In particular, the second-stage model for the SNP_j, CPG_j pair was:

r_i = β_0 + β_1·SNP_j + β_2·CPG_j + β_3·(SNP_j × CPG_j) + ε_i    (1)

SNP_j, CPG_j pairs were made by pairing each SNP passing QC to its nearest cytosine-phosphate-guanine (CpG) site, resulting in 654,755 pairs, with some CpG sites assigned to multiple SNPs. The only exception to this pairing strategy was for 3 SNPs with major effects (see next paragraph for details), which were assigned to the "causal" CpG site, which was not necessarily the nearest CpG (in all cases these were within 12,500 bp). We note that the model in eq. (1) is informed by the true simulated data model for the data provided as part of GAW20, in which SNP effects are moderated by methylation of nearby CpG sites.
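The second stage then reduces, for each SNP-CpG pair, to an ordinary linear model; the sketch below (hypothetical data frame pair_dat) fits eq. (1) and extracts the p value of the overall model F-test that is later combined across pairs.

```r
# Stage 2 sketch: eq. (1) for one SNP-CpG pair; snp in {0,1,2}, cpg in [0,1].
fit2 <- lm(r ~ snp * cpg, data = pair_dat)

f <- summary(fit2)$fstatistic   # named vector: value, numdf, dendf
p <- pf(f["value"], f["numdf"], f["dendf"], lower.tail = FALSE)
```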
Gene selection
Our analyses focused on 3 distinct subsets of genes. First, the GAW simulated data set includes 5 genes (hereafter, major effect genes) containing (or within 50,000 bp of) a causal SNP with heritabilities of 0.025, 0.05, 0.075, 0.10, and 0.125. Second, the GAW simulated data set contains 34 genes containing exactly 1 causal SNP with heritability of 0.001 (hereafter, minor effect genes). Third, we randomly selected 39 other genes from the remaining list of 16,604 genes not containing causal variants (hereafter, noncausal genes). Thus, a total of 78 genes were considered in our analyses.
Sets of SNPs were assembled for each gene, k = 1,…,78. In particular, for most genes, all SNPs contained within the start-stop positions of the gene (based on human genome build 18 [hg18]) were considered "part of" the gene. The exceptions to this were 3 major-effect SNPs that were not located within a gene. In 2 cases, the causal SNP was within 50 kb of the nearest gene and so was added to the set of SNPs within the gene (SIPA1L2 and MSRB2). In the final case, where the nearest major-effect SNP was not within 50 kb of the nearest gene, we created a synthetic gene that included the SNPs within 50 kb of the SNP (SYNTH1).
Gene sets
We also considered 5 sets of variants that were not solely defined by gene boundaries. One of these sets (CAUSAL5) consists of only the 5 causal variants with heritabilities of 0.025 or larger (major effect genes) (to act as a positive control). Two sets, UNION5 and UNION2, are, respectively, the union of all 5 causal genes and the union of LYRM4 and HS3ST3A1, and thus contain 5 and 2 causal variants, respectively. NOISE5 and NOISE2 also have 5 and 2 causal variants, respectively, but the rest of the variants are either noncausal or minor causal.
Gene-based test statistics
We evaluated 6 structurally different gene-based test statistics in addition to a "standard" single-marker test. For each gene, k, a new statistic, G, was created by combining the p values from the F-statistic test of overall model significance of eq. (1) over all m SNP-CpG sites assigned to the gene. Thus, m distinct p_j values were combined into a single value (G_k). Table 1 shows the 6 methods we used to compute G as a function of p.
Choices of G were informed by prior research (see Background for details). In brief, the sum of ln p is informed by Fisher's method for combining tests and by burden tests (although robust to different effect directions), the sum of squared ln p is informed by variance components tests, and min p is informed by recent research on test statistics highly robust to large proportions of nonassociated statistics. We proposed 3 threshold-based tests that attempt to put a threshold on the "noise" of noncausal SNPs through a p value threshold of either 0.01, 0.05, or 0.10. We used negative ln-transformations of p in line with prior research (e.g., Fisher's combined probability test). The benefit of the threshold approach is that any p_j above the threshold value will have no effect on the summation across the m SNP-CpG sites. Thus all SNP-CpG sites that would be considered not statistically significant on their own at the threshold level will contribute nothing to G, while other SNP-CpG sites will contribute according to the square of the natural log of their scaled p value.
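A compact implementation of these six combinations might look as follows, for the vector p of m per-pair F-test p values in a gene; the thresholded sums zero out pairs above the threshold t and let the remaining pairs contribute (ln(p_j/t))², matching the description above, while the significance of each statistic is later assessed by permutation rather than analytically.

```r
# Sketch of the six gene-based statistics G from Table 1.
combine_G <- function(p) {
  lp <- -log(p)
  thresh_sum <- function(t) sum(ifelse(p < t, log(p / t)^2, 0))
  c(sum_ln    = sum(lp),          # informed by Fisher's method / burden tests
    sum_sq_ln = sum(lp^2),        # informed by variance-components tests
    min_p     = min(p),           # note: small values are the extreme tail
    pT_0.01   = thresh_sum(0.01),
    pT_0.05   = thresh_sum(0.05),
    pT_0.10   = thresh_sum(0.10))
}
```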
Permutations
Permutations were used to assess the statistical significance of G. Briefly, the residual values from stage 1 were computed separately for each individual in each simulation. These residual values were permuted, and the permuted residual values were then used to generate permuted β values in stage 2. We did 1000 permutations for each simulation considered, making sure to reuse the same shuffles for each SNP-CpG pair to preserve the correlation structure between and across CpG sites and SNPs within each gene. Empirical p values were computed as the proportion of permuted values of G that were more extreme than the observed value of G. We used a significance level of 0.05 for all tests, except single-marker tests, which used a significance level of 0.05/m_k, where m_k represents the number of SNPs, m, in gene (or set) k, representing a candidate gene significance level.
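This scheme can be sketched as below, reusing combine_G() from the earlier sketch; gene_pvals() is a hypothetical wrapper that refits eq. (1) for every SNP-CpG pair in a gene and returns their F-test p values, and the same permutation indices are reused across pairs to preserve the correlation structure.

```r
# Permutation sketch for the empirical p value of each statistic in one gene.
set.seed(1)
B    <- 1000
shuf <- replicate(B, sample(length(r)))   # shared shuffles across pairs

G_obs  <- combine_G(gene_pvals(r, gene_pairs))
G_perm <- apply(shuf, 2,
                function(ix) combine_G(gene_pvals(r[ix], gene_pairs)))

p_emp <- rowMeans(G_perm >= G_obs)        # use <= for min p (small = extreme)
```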
Performance across 200 simulations
In Table 2, performance for each gene-based test statistic, G_SC, is provided, stratified by whether a gene (or set) contained 1 or more major causal variants, minor causal variants, or no causal variants. Performance is assessed by computing the proportion of genes with p values less than 0.05 across all genes and simulations, except for single-marker p values, which were evaluated using a Bonferroni-corrected significance threshold of 0.05/m_k, where m_k represents the number of SNPs, m, in gene (or set) k. For single-marker tests, genes containing 1 or more SNPs with a p value below the threshold were deemed significant. Table 2 illustrates reasonable control of the false-positive error rate, as all methods detected fewer than 5% of genes containing no causal variants as significant. Genes containing minor causal variants were detected only slightly more frequently than genes containing no causal variants, and so we focus the remainder of our analysis on genes containing major causal variants.
Tables 3 and 4 highlight the power of each SNP-CpG statistic, G_SC, across the 5 major effect genes (Table 3) and the synthetically created sets of SNP-CpG pairs (Table 4). Table 3 demonstrates that, for genes containing only a single, highly heritable variant, single-marker methods perform reasonably well compared to gene-based methods. In 3 of the 5 cases (SIPA1L2, LYRM4, and HS3ST3A1), one or more of the threshold-based approaches (pT) and min p methods outperformed or performed similarly to single-marker methods, but averaging methods (sum of ln p and sum of squared ln p) performed comparably (HS3ST3A1 and LYRM4) or worse (SIPA1L2). In 2 cases (SYNTH1 and MSRB2), averaging methods outperformed the other methods, with threshold methods performing next best, followed by min p, and single-marker methods performing worst. The pT 0.01 and min p methods outperformed single-marker methods in all 5 cases.
As seen in Table 4, all methods performed well on a set containing only causal variants with high heritability (CAUSAL5), but once noncausal variants were added, the aggregating methods outperformed single-marker method (UNION5, NOISE5). A similar pattern was observed with sets containing 2 causal variants (UNION2 and NOISE2).
Discussion and conclusions
To date, few papers have considered multimarker (gene-based) approaches for methylation data. Our proposed approach to the aggregation of statistical evidence of phenotypic association across multiple SNP-CpG pairs serves as a proof-of-concept of this approach in candidate gene analyses investigating the moderating effects of methylation. In particular, in a candidate-gene (versus genome-wide) context, significance levels are higher and in line with those used here (0.05). Our analysis demonstrates reasonable false-positive rates, and generally good performance of multimarker methods on sets containing SNP-CpG sites with reasonably large effects. As is often the case in practice, the ability to detect markers with low heritability remains challenging.
In general, the patterns seen for the performance of multimarker tests of SNP-CpG pairs follow those for SNP-variant-based analysis methods. In particular, sets with lower numbers of variants and only a single causal variant were challenging for multimarker methods to detect, although averaging methods tended to outperform threshold-based and the min p methods. As the number of variants increased, threshold-based and the min p methods tended to outperform averaging type multimarker tests. As the number of causal variants in the set increased, multimarker tests performed better than single-marker tests. The threshold-based testing approaches are a reasonably novel approach to multimarker testing, and performed reasonably well as a robust intermediary to the min p method (optimized for large numbers of variants when few are causal) and averaging methods (sum of ln p and sum of squared ln p) (optimized for lower numbers of variants with multiple causal variants).
The GAW20 simulated data set only contained 200 simulations, and so our analysis was limited in its ability to support broad conclusions about power and Type I error. Further work is needed to explore the widespread control of Type I error and the power of multimarker tests for methylation data in more wide-ranging simulated data sets and in a genome-wide testing situation (lower significance levels). We also note that our choice to use a linear model containing an interaction term between methylation (CpG) and SNP was informed by the simulation model used in GAW20. While serving as a proof-of-concept for the multimarker analysis of methylation data, in practice the test statistic used should be informed by the hypothesized biological mechanism of the effect of methylation. The model used here is a reasonable, although not necessary, hypothesis of this effect. Further work is needed to investigate other models and the performance of multimarker methods in those settings. Our results suggest the use of gene-based tests when investigating methylation-SNP impact on phenotypes; however, further testing is needed in more wide-ranging and comprehensive simulation settings.
Funding
Publication of this article was supported by NIH R01 GM031575.
Availability of data and materials
The data that support the findings of this study are available from the Genetic Analysis Workshop (GAW), but restrictions apply to the availability of these data, which were used under license for the current study. Qualified researchers may request these data directly from GAW. | 2018-09-23T17:28:41.794Z | 2018-09-01T00:00:00.000 | {
"year": 2018,
"sha1": "b4d0fbd4278645512d14a75257d8f7ab8b3265eb",
"oa_license": "CCBY",
"oa_url": "https://bmcproc.biomedcentral.com/track/pdf/10.1186/s12919-018-0124-y",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b4d0fbd4278645512d14a75257d8f7ab8b3265eb",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
17610697 | pes2o/s2orc | v3-fos-license | A Comprehensive Study of Large Scale Structures in the GOODS-SOUTH Field up to z ∼ 2.5
The aim of this paper is to identify and study the properties and galactic content of groups and clusters in the GOODS-South field up to z ∼ 2.5, and to analyse the physical properties of galaxies as a continuous function of environmental density up to high redshift. We use the deep (z850 ∼ 26), multi-wavelength GOODS-MUSIC catalogue, which has spectroscopic redshifts for 15% of its galaxies and accurate photometric redshifts for the remaining fraction. On these data, we apply a (2+1)D algorithm, previously developed by our group, that provides an adaptive estimate of the 3D density field. We support our analysis with simulations to evaluate the purity and the completeness of the cluster catalogue produced by our algorithm. We find several high density peaks embedded in larger structures in the redshift range 0.4-2.5. From the analysis of their physical properties (mass profile, M200, σv, LX, U−B vs. B diagram), we derive that most of them are groups of galaxies, while two are poor clusters with masses of a few times 10^14 M⊙. For these two clusters we find, from the Chandra 2Ms data, an X-ray emission significantly lower than expected from their optical properties, suggesting that the two clusters are either not virialised or gas poor. We also analyse the dependence on environment of galaxy colours, luminosities, stellar masses, ages and star formation rates. We find that galaxies in high density regions are, on average, more luminous and massive than field galaxies up to z ∼ 2. The fraction of red galaxies increases with luminosity and with density up to z ∼ 1.2. At higher z this dependence on density disappears. The variation of galaxy properties as a function of redshift and density suggests that a significant change occurs at z ∼ 1.5-2. (abridged)
Introduction
The study of galaxy clusters and of the variation of galaxy properties as a function of the environment are fundamental tools to understand the formation and evolution of the large scale structures and of the different galaxy populations observed both in the local and in the high redshift Universe. The effects of the environment on galaxy evolution have been studied at progressively higher redshifts through the analysis of single clusters (e.g. Treu et al. 2003; Nakata et al. 2005; Tran et al. 2005; Mei et al. 2006; Menci et al. 2008; Rettura et al. 2008), as well as by studying the variation of galaxy colours, morphologies and other physical parameters as a function of projected or 3-dimensional density (e.g. Dressler et al. 1997; Blanton et al. 2005; Cucciati et al. 2006; Cooper et al. 2007; Elbaz et al. 2007). Moreover, the analysis of cluster properties at different wavelengths provides interesting insights into the matter content and evolutionary histories of these structures (Lubin et al. 2004; Rasmussen et al. 2006; Popesso et al. 2007).
A variety of survey techniques have proved effective at finding galaxy clusters up to z ∼ 1 and beyond. X-ray selected samples at z ≥ 1 probe the most massive and dynamically relaxed systems (e.g., Maughan et al. 2004; Stanford et al. 2006; Bremer et al. 2006; Lidman et al. 2008). Large-area multicolour surveys, such as the red-sequence survey (e.g. Gladders & Yee 2005), have collected samples of systems in a range of evolutionary stages. The mid-IR cameras on board the Spitzer and Akari satellites have extended the range and power of multicolour surveys, producing confirmed and candidate clusters up to z ∼ 1.7 (Stanford et al. 2005; Eisenhardt et al. 2008; Goto et al. 2008). However, most of the previous techniques present some difficulties in the range 1.5 < z < 2.5, where we expect to observe the formation of the red sequence and the first hints of colour segregation (Cucciati et al. 2006; Kodama et al. 2007). Searching for extended X-ray sources becomes progressively more difficult at large distances, because the surface brightness of the X-ray emission fades as (1 + z)^4. The sensitivity of surveys exploiting the Sunyaev-Zeldovich (SZ) effect is, at present, not sufficient to detect any of the known clusters at z > 1 (Carlstrom et al. 2002; Staniszewski et al. 2008). Finally, the detection of galaxy overdensities in surveys using two-dimensional algorithms requires additional a priori assumptions on either the galaxy luminosity function (LF), as in the Matched Filter algorithm (Postman et al. 1996), or relies on the presence of a red sequence (Gladders & Yee 2000). Biases produced by these assumptions can hardly be evaluated at high redshift.
In this context, photometric redshifts obtained from deep multi-band surveys for large samples of galaxies, though having a relatively low accuracy if compared to spectroscopic redshifts, can be exploited to detect and study distant structures.
In the past few years, several authors (e.g. Botzler et al. 2004; van Breukelen et al. 2006; Scoville et al. 2007; Zatloukal et al. 2007; Mazure et al. 2007; Eisenhardt et al. 2008) have developed or extended known algorithms to take into account the greater redshift uncertainties. We have developed a new algorithm that uses an adaptive estimate of the 3D density field, as described in detail in Trevese et al. (2007). This method combines galaxy angular positions and precise photometric redshifts to estimate the galaxy number-density and to detect galaxy overdensities in three dimensions also at z > 1, as described in Sect. 3.
Our first application, to the K20 survey (Cimatti et al. 2002), detected two clusters at z ∼ 0.7 and z ∼ 1, previously identified through spectroscopy (Gilli et al. 2003; Adami et al. 2005). We then applied the algorithm to the much larger GOODS-South field and, in Castellano et al. (2007), hereafter C07, we reported our initial results, i.e. the discovery of a forming cluster of galaxies at z ∼ 1.6.
In this paper we present the application of the algorithm to the entire GOODS-South area (∼ 143 arcmin 2 ), using the GOODS-MUSIC catalogue (Grazian et al. 2006a) up to z ∼ 2.5, to give a comprehensive description of the large scale structures in this field, with a detailed analysis of the physical properties of each high density peak. We also study the physical properties of galaxies as a function of environmental density up to redshift 2.5, higher than previous similar studies (e.g., Cucciati et al. 2006;Cooper et al. 2007).
To validate our technique, we analysed the completeness and purity of our cluster detection algorithm, up to z ∼ 2.5, through its application to a set of numerically simulated galaxy catalogues. Besides allowing an assessment of the physical reality of the structures found in the GOODS field, this analysis provides the starting point to test the reliability of the algorithm in view of our plan to apply it to photometric surveys of similar depth but covering much larger areas.
The paper is organised as follows: in Sect. 2, we describe the basic features of our dataset. In Sect. 3, we summarise the basic features of the (2+1)D algorithm used in our analysis, and compare it with other methods based on photometric redshifts. In Sect. 4, we show the results of the application of our method to simulated data. In Sect. 5, we present the catalogue of the structures detected and the derived physical properties. In Sect. 6, we study the colour magnitude diagrams of the detected structures. In Sect. 7, we analyse the physical properties of galaxies as a continuous function of environmental density.
All the magnitudes used in the present paper are in the AB system, if not otherwise declared. We adopt a cosmology with Ω Λ = 0.7, Ω M = 0.3, and H 0 = 70 km s −1 Mpc −1 .
The GOODS-MUSIC catalogue
We used the multicolour GOODS-MUSIC catalogue (GOODS MUlticolour Southern Infrared Catalogue; Grazian et al. 2006a). This catalogue comprises information in 14 bands (from the U band to 8µm) over an area of about 143.2 arcmin². We used the z850-selected sample (z850 ∼ 26), which contains 9862 galaxies (after excluding AGNs and galactic stars). About 15% of the galaxies in the sample have spectroscopic redshifts; for the other galaxies we used photometric redshifts obtained from a standard χ² minimisation over a large set of spectral models (see e.g., Fontana et al. 2000). The accuracy of the photometric redshifts is very good, with an r.m.s. of 0.03 for the ∆z/(1 + z) distribution up to redshift z = 2. For a detailed description of the catalogue we refer to Grazian et al. (2006a).
The method we applied to estimate the rest-frame magnitudes and the other physical parameters (M, SFR, age) is described in previous papers (e.g., Fontana et al. 2006). Briefly, we use a χ² minimisation analysis, comparing the observed SED of each galaxy to synthetic templates, with the redshift fixed during the fitting process to the spectroscopic or photometric redshift derived in Grazian et al. (2006a). The set of templates is computed with standard spectral synthesis models (Bruzual & Charlot 2003), chosen to broadly encompass the variety of star formation histories, metallicities and extinctions of real galaxies. For each model of this grid, we compute the expected magnitudes in our filter set and find the best-fitting template. From the best-fitting template we obtain, for each galaxy, the physical parameters that we use in the analysis. Clearly, the physical properties are subject to uncertainties and biases related to the synthetic libraries used to fit the galaxy SEDs; however, as shown in Fontana et al. (2006), the extension of the SEDs to mid-IR wavelengths with IRAC tends to reduce the uncertainties on the derived stellar masses. For a detailed analysis of the uncertainties on the physical properties we refer to our previous papers (e.g., Grazian et al. 2007). In the present work we also make use of the 2Ms X-ray observation of the Chandra Deep Field South presented by Luo et al. (2008) and of the catalogue of VLA radio sources (1.4 GHz) in the CDFS compiled by Miller et al. (2008).
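For illustration, the core of such a χ² template fit can be written in a few lines; the template grid and flux units below are placeholders, and the normalisation of each template is solved analytically before taking the minimum χ².

```r
# Sketch of a chi-square SED fit: observed fluxes vs. a grid of templates.
chi2_fit <- function(f_obs, sig, templates) {
  fit_one <- function(f_tmp) {
    a <- sum(f_obs * f_tmp / sig^2) / sum(f_tmp^2 / sig^2)  # best scaling
    sum((f_obs - a * f_tmp)^2 / sig^2)
  }
  chi2 <- apply(templates, 1, fit_one)   # one row per synthetic model
  list(best = which.min(chi2), chi2 = chi2)
}
```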
The (2+1)D algorithm for the density estimation
To estimate a three dimensional density, we developed a method that combines the angular position with the photometric redshift of each object. The algorithm is described in detail in Trevese et al. (2007): here we outline its main features and, in the next section, we present the simulations used to estimate its reliability.
The procedure is designed to automatically take into account the probability that a galaxy in our survey is physically associated with a given overdensity. This is achieved by computing the galaxy densities in volumes whose shape is proportional to the positional uncertainties in each dimension (α, δ and z).
First, we divide the volume of the survey into cells whose extension in the different directions (∆α, ∆δ, ∆z) depends on the relevant positional accuracy; the cells are thus elongated in the radial direction. We choose the cell sizes small enough to keep an acceptable spatial resolution, while avoiding a useless increase of the computing time. We adopt ∆z = 0.025 (radial direction) and ∆α = ∆δ ∼ 2.4 arcsec in the transverse direction; the latter value corresponds to ∼ 30, 40 and 60 kpc (comoving) at z ∼ 0.7, ∼ 1.0 and ∼ 2.0, respectively.
For each cell in space we then count neighbouring objects in volumes that are progressively increased in each direction by steps of one cell, thus keeping the symmetry imposed by the different intrinsic resolutions. When a number n of objects is reached we assign to the cell a comoving density ρ = n/V_n, where V_n is the comoving volume which includes the n nearest neighbours. Clusters would be better characterised by their proper density, since they have already decoupled from the Hubble flow; however, the average uncertainty on photometric redshifts, which grows with redshift as (1 + z), forces us to measure densities in volumes that are orders of magnitude larger than the real volume of a cluster, even at low z. Thus we decided to measure comoving densities, which have the further advantage of giving a redshift-independent density scale for the background. We fix n = 15 as a trade-off between spatial resolution and signal-to-noise ratio. Indeed, through the simulations described in Sect. 4, we verified that a lower n would greatly raise the high-frequency noise in the density maps, thus increasing the contamination from false detections in the cluster sample, even at low redshift ('purity' parameter in Tab. 1). In the density estimation, we assign a weight w(z) to each detected galaxy at redshift z, to take into account the increase of the limiting absolute magnitude with increasing redshift for a given apparent magnitude limit. We choose w(z) = 1/s(z), where s(z) is the fraction of objects detected with respect to a reference redshift z_c below which we detect all objects brighter than the relevant M_c ≡ M_lim(z_c):

s(z) = ∫_{−∞}^{M_lim(z)} Φ(M) dM / ∫_{−∞}^{M_c} Φ(M) dM,

where Φ(M) is the redshift-dependent galaxy luminosity function computed on the same GOODS-MUSIC catalogue (Salimbeni et al. 2008), and M_lim(z) is the absolute magnitude limit at the given redshift z, corresponding to the apparent magnitude limit m_lim of the survey, which depends on the position (see Grazian et al. 2006a). We use this correction to obtain a density scale independent of redshift, at least to a first approximation. In computing M_lim(z), we use K- and evolutionary corrections for each object, computed with the same best-fit SEDs used to derive the stellar masses, the rest-frame magnitudes, and the other physical properties.
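As a concrete illustration, the following is a minimal Python sketch (with hypothetical array names) of the adaptive nearest-neighbour density estimate described above; the conversion of the search box to a true comoving volume, which the real algorithm performs, is only stubbed here with a box volume in grid units.

```python
import numpy as np

def density_map(cells, galaxies, weights, n=15,
                step=np.array([2.4, 2.4, 0.025])):
    """Adaptive (2+1)D density sketch.

    cells    : (M, 3) cell centres (RA [arcsec], Dec [arcsec], z)
    galaxies : (N, 3) galaxy positions in the same coordinates
    weights  : (N,) completeness weights w(z) = 1/s(z)
    step     : cell half-sizes (dalpha, ddelta, dz); the z elongation
               encodes the poorer radial resolution
    """
    rho = np.empty(len(cells))
    for i, c in enumerate(cells):
        k = 1
        while True:
            half = k * step                 # grow the box by one cell per axis
            inside = np.all(np.abs(galaxies - c) <= half, axis=1)
            if weights[inside].sum() >= n:  # weighted count of neighbours
                break
            k += 1
        # NOTE: a real implementation converts this box to a comoving
        # volume V_n; here the box volume in grid units stands in for it.
        rho[i] = weights[inside].sum() / np.prod(2 * half)
    return rho
```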
We apply this algorithm to the data from the GOODS-MUSIC catalogue in the redshift range from z ∼ 0.4 to z ∼ 2.5, where we have sufficient statistics. We perform this analysis by selecting galaxies brighter than M_B = −18 up to redshift 1.8 and brighter than M_B = −19 at higher redshift, to minimise the completeness correction described above, keeping the average weight w(z) below 1.6 in all cases.
Using this comoving density estimate we analyse the field in two complementary ways. First, we detect and study galaxy overdensities, i.e. clusters or groups (see Sect. 5), defined as connected three-dimensional regions with density exceeding a fixed threshold and a minimum number of members chosen according to the results of the simulations (Sect. 4). In particular, we isolate the structures as the regions having ρ > ρ̄ + 4σ on our density maps and at least 5 members. We then consider as part of each structure the spatially connected region (in RA, DEC, and redshift) around each peak with an environmental density > 2σ above the average and at least 15 member galaxies. To avoid spurious connections between different structures at the same redshift, we consider regions within an Abell radius from the peak. The galaxies located in this region are associated with each structure. Second, we study the variation of galaxy properties as a function of environmental density (Sect. 7), associating to each galaxy in the sample the comoving density at its position.
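The sketch below (Python; grid and names assumed, and membership counted in grid cells rather than galaxies for brevity) shows one way to implement this two-threshold selection with standard connected-component labelling.

```python
import numpy as np
from scipy import ndimage

def find_structures(rho, hi=4.0, lo=2.0, min_peak_cells=5):
    """Two-threshold structure finder on a 3D density grid 'rho'.

    Peaks are connected regions above mean + hi*sigma; each surviving
    peak is extended to the connected mean + lo*sigma region around it.
    """
    mu, sigma = rho.mean(), rho.std()
    peaks, n_peaks = ndimage.label(rho > mu + hi * sigma)
    halo, _ = ndimage.label(rho > mu + lo * sigma)
    structures = []
    for lab in range(1, n_peaks + 1):
        mask = peaks == lab
        if mask.sum() < min_peak_cells:
            continue                      # too few cells in the 4-sigma core
        ids = np.unique(halo[mask])       # 2-sigma component(s) hosting the peak
        structures.append(np.isin(halo, ids[ids > 0]))
    return structures                     # list of boolean member masks
```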
Comparison with other methods based on photometric redshifts
As mentioned in the introduction, other methods based on photometric redshifts have been developed for the detection of cosmic structures. Here we present the main differences between our algorithm and those which appeared most recently in the literature; a more detailed comparison is beyond the scope of the present work, since it would require extensive simulations and/or the application of the different methods to the same datasets. A similar three-dimensional approach has been proposed by Zatloukal et al. (2007). They select cluster candidates by detecting excess density in the 3D galaxy distribution reconstructed from the photometric redshift probability distributions p(z) of each object. However, at variance with our method, they do not adopt any redshift-dependent correction for their estimated density, since they analyse only a small redshift range. As we outlined in the previous section, such a correction is needed to provide a redshift-independent density scale in a sample as deep as the GOODS-MUSIC one. Botzler et al. (2004) expanded the well-known Friends-of-Friends (FoF) algorithm (Huchra & Geller 1982) to take into account photometric redshift uncertainties. This method links individual galaxies together into groups if their redshift differences and angular distances are below fixed thresholds. These thresholds depend on the photometric redshift uncertainties, which are greater than the average physical distance between galaxies and also greater than the velocity dispersion of rich clusters. This could induce the problem of structures percolating through excessively large volumes. They dealt with this issue by dividing the catalogue into redshift slices. Instead of comparing the distance between galaxy pairs, as done in a FoF approach, we use the statistical information on how many galaxies are in the neighbourhood of a given point to estimate a physical density. This approach avoids the percolation problem more effectively, since it identifies structures from the 4σ density peaks, whose extension is limited by the fixed threshold in density.
Several authors, e.g. Scoville et al. (2007), Mazure et al. (2007), Eisenhardt et al. (2008) and van Breukelen et al. (2006), estimated the surface density in redshift slices, each with a different method: the first two use adaptive smoothing of galaxy counts, Eisenhardt et al. (2008) analyse a density map convolved with a wavelet kernel, while the last adopt FoF and Voronoi tessellation (Marinoni et al. 2002). At variance with these, we prefer to adopt an adaptive 3D density estimate that automatically considers distances in all directions and the relevant positional accuracies at the same time. This approach requires longer computational times, but allows for an increased resolution in high density regions, where the chosen number of objects is found in a smaller volume than in field and void regions. As a consequence, it also avoids the peculiar "border" effects produced by the limits of the redshift slices, and there is no need to adopt additional criteria to decide whether an overdensity, present in two contiguous 2D density maps at similar angular positions, represents the same group or not (as done for example by Mazure et al. 2007). This clearly also depends on the ability of the algorithm to separate aligned structures (for a more detailed discussion see Sect. 4).
Finally, another important difference with respect to previous methods is in the way we use the photometric redshifts: some authors use best-fit values, e.g. Mazure et al. (2007), while others, e.g. Zatloukal et al. (2007) and Eisenhardt et al. (2008), consider the full probability distribution function (PDF) to take into account redshift uncertainties. As discussed by Scoville et al. (2007), this last method could tend to preferentially detect structures formed by early-type galaxies, since they have smaller photometric redshift uncertainties thanks to their stronger Balmer break, when this feature is well sampled by the observed bands. We are less biased in this respect, since we consider the photometric redshift uncertainty in a conservative way, choosing only the maximum redshift range within which we count the neighbour galaxies associated with each cell. We take this range as ±2 · σ_z around the redshift of each cell, where σ_z = 0.03 · (1 + z) (Grazian et al. 2006a) is the average accuracy of the photometric redshifts in the range we analyse.
Simulations
We estimate the reliability of our cluster detection algorithm by testing it on a series of mock catalogues designed to reproduce the characteristics of the GOODS survey. These mock catalogues are composed of a given number of groups and clusters superimposed on a random (Poissonian) field. While this is a rather simplistic representation of a survey, it allows us to evaluate some basic features of our algorithm without the use of N-body simulations. We expand the previous simulations presented in Trevese et al. (2007), using a larger number of mock catalogues and adopting a more consistent treatment of the survey completeness. For each redshift, we calculate the limiting absolute B magnitude for the two populations of "red" and "blue" galaxies, defined from the minima in the U−V vs. B distribution in Salimbeni et al. (2008), using the average type-dependent K- and evolutionary corrections calculated from the best-fit SEDs of the objects in the real catalogue. We then generate an "observed" mock catalogue of field galaxies randomly distributed over an area equal to that of the GOODS-South survey. At each redshift, the number of objects in the catalogue is obtained from the integral of the rest-frame B band luminosity function Φ(M_B, z) derived in Salimbeni et al. (2008), up to the limiting absolute magnitude M_B(z) computed as described above. Finally, we create different mock catalogues by superimposing a number of structures on the random fields. Given the relatively small comoving volume sampled by the survey, we expect to find only groups and small clusters, with a total mass M ∼ 10¹³−10¹⁴ M_⊙ and a number of members corresponding to the lowest Abell richness classes (Girardi et al. 1998a). To check that the performance of the algorithm does not change appreciably with a varying number of real overdensities of this kind, we perform three different subsets of simulations. Each subset is based on the analysis of 10 mock catalogues, with a number of groups equal to the number of M > 10¹³ M_⊙, M > 2 × 10¹³ M_⊙ and M > 3 × 10¹³ M_⊙ DM haloes, obtained by integrating the Press & Schechter mass function (Press & Schechter 1974) over the comoving volume sampled by the survey. Their positions in real space are chosen randomly. Cluster galaxies follow a King-like spatial distribution n(r) ∝ [1 + (r/r_c)²]^{−3/2} (see Sarazin 1988) with a typical core radius r_c = 0.25 Mpc.

Table 3. Separation threshold for aligned groups.
To take into account the uncertainty on photometric redshifts, we assign to each cluster galaxy a random redshift extracted from a Gaussian distribution centred on the cluster redshift z_cl and having a dispersion σ_z = 0.03 · (1 + z_cl). We neglect the real velocity dispersion of the cluster, which is much smaller than the z_phot uncertainty. We analyse the simulations in the same way as the real catalogue, i.e. calculating the galaxy volume density considering objects with M_B ≤ −18 at z < 1.8 and objects with M_B ≤ −19 at z ≥ 1.8.
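The following minimal Python sketch (function and variable names assumed) shows how mock cluster members can be drawn with the King-like radial profile and the Gaussian photometric-redshift scatter used here; sky coordinates and the random field population are omitted.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def mock_cluster_members(z_cl, n_gal, r_core=0.25, r_max=2.5):
    """Draw radii from p(r) ∝ r^2 [1 + (r/r_c)^2]^(-3/2), i.e. the
    volume-weighted King-like profile, and redshifts from
    N(z_cl, 0.03 * (1 + z_cl))."""
    grid = np.linspace(1e-3, r_max, 2000)              # radii in Mpc
    p = grid**2 * (1 + (grid / r_core) ** 2) ** -1.5
    r = rng.choice(grid, size=n_gal, p=p / p.sum())
    z = rng.normal(z_cl, 0.03 * (1 + z_cl), size=n_gal)
    return r, z
```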
We evaluate the completeness of the sample of detected clusters (the fraction of real clusters detected) and its purity (the fraction of detected structures corresponding to real ones) at different redshifts (see Table 1). We also present the number of unresolved pairs (a detected structure corresponding to two real ones) and the number of double identifications (a unique real structure separated into two detected ones).
Our aim is to study the properties of individual structures and not, for example, to perform group number counts for cosmological purposes. For this reason, we prefer to choose conservative selection criteria in order to maximise the purity of our sample, while still keeping the completeness high. We isolate the structures as described in Sect. 3, and we consider as significant only those overdensities with at least 5 members in the 4σ region and 15 members in the 2σ region.
A structure in the input catalogue is identified if its centre is within a projected distance ∆r = 0.5 Mpc, and within ∆z = 0.1, from the centre of a detected structure for the low redshift sample, and within ∆r = 0.8 Mpc, ∆z = 0.2 at high z (to account for the increased uncertainties in redshift and position). The results are reported in Table 1. We can see that the chosen thresholds and selection criteria allow for a high purity (∼ 100%) at z < 1.8, while still detecting about 80% of the real structures. At z > 1.8, given the greatly reduced fraction of observed galaxies, the noise is higher and these criteria turn out to be very conservative (therefore the completeness is low), but they are necessary to keep a low number of false detections (purity ∼ 75 − 80%). Table 2 shows the average distance between the centres of the real structures and the centres of their detected counterparts. The density peaks allow us to identify the positions of real groups with good accuracy.
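A minimal sketch (Python, with assumed array layouts) of this matching and of the completeness/purity bookkeeping:

```python
import numpy as np

def completeness_purity(real, detected, dr=0.5, dz=0.1):
    """real, detected: (N, 3) arrays of structure centres (x, y, z),
    with x, y projected comoving Mpc and z redshift."""
    def has_match(centres, others):
        hits = []
        for c in centres:
            sep = np.hypot(others[:, 0] - c[0], others[:, 1] - c[1])
            hits.append(np.any((sep < dr) & (np.abs(others[:, 2] - c[2]) < dz)))
        return np.array(hits)

    completeness = has_match(real, detected).mean()   # real clusters recovered
    purity = has_match(detected, real).mean()         # detections that are real
    return completeness, purity
```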
We also evaluate the ability of the algorithm to separate real structures that are very close both in redshift and in angular position. In Table 3 we present, for different intracluster distances, the density level at which pairs of real groups appear as separate peaks. Both at low and high redshift it is not possible to separate structures whose centres are closer than 1.0 Mpc on the plane of the sky and 2σ_z in redshift. For larger separations, using higher thresholds (5 or 6σ above the average ρ), it is possible to separate the groups.
A catalogue of the detected overdensities in the GOODS-South field
An inspection of the 3-D density map shows some complex high density structures distributed over the entire GOODS field. In particular, we find diffuse overdensities at z ∼ 0.7, z ∼ 1, z ∼ 1.6 and z ∼ 2.3. Some of these have already been partially described in the literature (Gilli et al. 2003; Adami et al. 2005; Vanzella et al. 2005; Trevese et al. 2007; Díaz-Sánchez et al. 2007; Castellano et al. 2007). Fig. 1 shows the position of these overdensities over the photometric redshift distribution of our sample. These overdensities are also traced by the distribution of the spectroscopically confirmed AGNs in our catalogue, as shown in the lower panel of Fig. 1 (these objects are not included in the sample used for the density estimation). This link between large scale structures and the AGN distribution was already noted, at lower redshift, in the CDFS (Gilli et al. 2003), in the E-CDFS (Silverman et al. 2008) and in the CDFN (Barger et al. 2003). Within these large scale overdensities, we identify the structures with the procedure described in Sect. 4. Using an analysis with a 5σ threshold, we find that two of the structures identified with ρ > ρ̄ + 4σ, at z ∼ 0.7 and z ∼ 1, are each the sum of two different structures, so we used a 5σ threshold to separate these peaks. We then associate the galaxies belonging to the region of overlap between two such structures to the less distant peak.
Overall, we find four structures at z ∼ 0.7, four structures at z ∼ 1, one at z ∼ 1.6 (see also C07) and three structures at z ∼ 2.3. The density isosurfaces of the structures at z ∼ 0.7, z ∼ 0.96, z ∼ 1.05 and z ∼ 2.3 are shown in Fig. 2, superimposed on the ACS z_850 band image of GOODS-South. The analogous image for the overdensity at z ∼ 1.6 is shown in Castellano et al. (2007). In the figure, we indicate with a cross the peak position of each identified structure. Other overdensities present did not pass our selection criteria described in Sect. 4.
All the structures are presented in Table 4, where we list the following properties. Column 1: ID number. Columns 2-4: the position of the density peak (redshift, RA and DEC) obtained with our 3-D photometric analysis. Column 5: the number of objects associated with each structure, as defined above. This number gives a hint of the richness of the structure; however, it should not be used to compare structures at different redshifts because of the different magnitude intervals sampled. Column 6: the average number of field objects present in a volume equal to that associated with the structure, at the relevant redshift. We calculated this number by integrating the evolving LFs obtained by Salimbeni et al. (2008). In particular, we integrated the LF up to an absolute limiting magnitude calculated using the average K- and evolutionary corrections and the z_850 limiting observed magnitude, as done in Sect. 4. In this way we take into account the selection effects given by the magnitude cut in our catalogue, as a function of redshift. Columns 7-8: the M_200 and r_200 (assuming bias factors 1 and 2). The mass M_200 is defined as the mass inside the radius corresponding to a density contrast δ_m = δ_gal/b ∼ 200 (Carlberg et al. 1997), where b is the bias factor. To estimate the 3D galaxy density contrast δ_gal we count the objects in the photometric redshift range occupied by the structure as a function of the clustercentric radius. We then perform a statistical subtraction of the background/foreground field galaxies, using an area at least 2.5 Mpc (comoving) away from the centre of every cluster in the relevant redshift interval. Finally, the density contrast is computed assuming spherical symmetry of the structure. The mass inside a volume V of density contrast δ_gal is determined by adapting to our case the method used for spectroscopic data at higher z by Steidel et al. (1998):

M = ρ̄_u V (1 + δ_m),

in which ρ̄_u is the average density of the Universe and δ_m is the total mass density contrast, related to the galaxy number density contrast through a bias factor: 1 + δ_m = 1 + δ_gal/b. We assume a bias factor b in the range 1 ≤ b ≤ 2 (see Arnouts et al. 1999).

Fig. 2. Density isosurfaces for structures at z ∼ 0.7 (a), z ∼ 0.95 (b), z ∼ 1.05 (c) and z ∼ 2.3 (d) (average, average +2σ, average +3σ to average +10σ) superimposed on the ACS z_850 band image of the GOODS-South field. Yellow crosses indicate the density peak of each structure; the number is the ID of the structure in Table 4. For the analogous picture regarding the cluster ID=9 at z ∼ 1.6 see Castellano et al. (2007). Other overdensities present did not pass our selection criteria described in Sect. 4.
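A minimal sketch of the Steidel et al. (1998) mass estimator given above, in Python with assumed parameter names (the comoving mean density follows from the adopted cosmology):

```python
def m200_estimate(delta_gal, volume_mpc3, b=1.0, omega_m=0.3, h=0.7):
    """Mass in a comoving volume of galaxy overdensity delta_gal:
    M = rho_u * V * (1 + delta_gal / b)."""
    rho_crit = 2.775e11 * h**2        # critical density [Msun / Mpc^3]
    rho_u = omega_m * rho_crit        # comoving mean matter density
    return rho_u * volume_mpc3 * (1.0 + delta_gal / b)
```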
Column 9: the level of the density peak, measured in number of σ above the average volume density. We then searched the available public spectroscopic data (Wolf et al. 2001; Le Fèvre et al. 2004; Szokoly et al. 2004; Mignoli et al. 2005; Vanzella et al. 2005, 2006, 2008) to check whether any of the members of the structures have spectroscopic redshifts, in order to estimate their location and velocity dispersion when the statistics are sufficient. Spectroscopic galaxies are considered members of a structure if their redshift is within 4500 km/s (i.e. three times the velocity dispersion of a rich cluster) of the mode of its redshift distribution. From these data, we estimated the average redshift of each structure and the velocity dispersion using the biweight estimators, computed with the ROSTAT package (Beers et al. 1990), with 68% confidence uncertainties obtained from a jackknife analysis.
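A sketch of this kinematic analysis using the biweight estimators available in astropy as a stand-in for ROSTAT (function names are ours):

```python
import numpy as np
from astropy.stats import biweight_location, biweight_scale
from astropy.constants import c

def structure_kinematics(z_members):
    """Biweight redshift and rest-frame velocity dispersion, with a
    jackknife error estimate on the dispersion."""
    z = np.asarray(z_members, dtype=float)
    z_bw = biweight_location(z)
    # peculiar velocities about the biweight centre, in km/s
    v = c.to('km/s').value * (z - z_bw) / (1 + z_bw)
    sigma_v = biweight_scale(v)
    # jackknife: recompute the dispersion dropping one member at a time
    jk = np.array([biweight_scale(np.delete(v, i)) for i in range(v.size)])
    err = np.sqrt((v.size - 1) / v.size * np.sum((jk - jk.mean()) ** 2))
    return z_bw, sigma_v, err
```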
In Table 5 we present the X-ray count rate in the 0.3-4 keV band, the corresponding flux (in the interval 0.5-2 keV) and the rest-frame luminosity (0.1-2.4 keV), from the Chandra 2 Ms exposure (Luo et al. 2008). We measure the count rates in a square of side ∼ 30″, centred on the position of the peak of each structure. For the count-rate to flux conversion we assume a Raymond-Smith model spectrum (Raymond & Smith 1977) with T = 1 keV and 3 keV and a metallicity of 0.2 Z_⊙. a - Values for a Raymond-Smith model with assumed temperatures of 3 keV and 1 keV, respectively, and metallicity 0.2 Z_⊙. b - u.l. indicates structures with a 3σ upper limit on the flux.
Structures at z ∼ 0.7
At redshift z ∼ 0.67 we isolate three high density peaks (ID=1, 2 and 3) that are part of a large scale structure already noted, as a whole, by Gilli et al. (2003).
For the structure with ID=1, we estimate the redshift from the 6 available spectroscopic redshifts. We find an average redshift of 0.665 ± 0.001 and a velocity dispersion of 446 ± 180 km s⁻¹. Assuming that the cluster is virialised, we estimate r_vir = 0.8 Mpc and M_vir = 1.0 · 10¹⁴ M_⊙, using the relations in Girardi et al. (1998b). This estimate also relies on the assumptions that there are no infalling galaxies and that the surface term (e.g. Carlberg et al. 1996) is negligible. Considering the uncertainties, also due to the small number of spectroscopic galaxies, M_vir is fairly consistent with the M_200 estimated from the galaxy density contrast (0.9 − 3 · 10¹⁴ M_⊙).
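For orientation, the following sketch applies the textbook virial estimator M_vir ≈ 3σ_v²R_vir/G; Girardi et al. (1998b) use refined versions of this relation, but the simplified form reproduces the quoted value to within the uncertainties.

```python
from astropy import units as u
from astropy.constants import G

def virial_mass(sigma_v_kms, r_vir_mpc):
    """Simple virial mass estimate: M ~ 3 sigma_v^2 R / G."""
    sigma = sigma_v_kms * u.km / u.s
    r = r_vir_mpc * u.Mpc
    return (3 * sigma**2 * r / G).to(u.Msun)

# virial_mass(446, 0.8) -> ~1.1e14 Msun, close to the quoted 1.0e14 Msun
```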
We also derive an upper limit on the X-ray luminosity of this structure, which is of the order of 0.2 − 0.3 · 10⁴³ erg s⁻¹. All the properties presented are consistent with the structure being a galaxy group/small cluster (Bahcall 1999).
The structures with ID=2, 3 have upper limits on their X-ray luminosities of the order of 0.2 − 0.3 · 10⁴³ erg s⁻¹, and their masses are of the order of M_200 ∼ 0.2 − 0.5 · 10¹⁴ M_⊙. These X-ray luminosities and masses are all typical of galaxy groups/small clusters (Bahcall 1999). Each of these structures contains a spectroscopically confirmed galaxy detected in the VLA 1.4 GHz survey (Miller et al. 2008).
At a slightly higher redshift (z ∼ 0.7) we identify a high density peak (ID = 4) embedded in another large scale structure already known in the literature (Gilli et al. 2003; Adami et al. 2005; Trevese et al. 2007). In our previous paper we identified this structure by applying our algorithm to the data from the K20 catalogue, and classified it as an Abell richness class 0 cluster.
In this new analysis we find that this structure is symmetric and has a regular mass profile. It has 92 associated objects (M_B(AB) < −18), and two AGNs. From the density contrast we obtain r_200 = 1.7 − 2.4 Mpc and a total mass of M_200 = 0.9 − 3.0 · 10¹⁴ M_⊙ for bias factor b = 2-1. From the 36 galaxies with spectroscopic redshifts, we estimate a redshift location of 0.734 ± 0.001 and a velocity dispersion of 634 ± 107 km s⁻¹. We derive a virial radius r_vir = 1.3 Mpc and a virial mass M_vir = 3.2 · 10¹⁴ M_⊙, in good agreement with M_200. The 3σ upper limit for the X-ray luminosity in the interval 0.1-2.4 keV is very low (L_X = 0.19 − 0.44 · 10⁴³ erg s⁻¹). Note that the area we considered does not include the X-ray source 173 of Luo et al. (2008), which, similarly to Gilli et al. (2003), we associate with the halo of the brightest cluster galaxy (ID_GOODS-MUSIC = 9792). Alternatively, Adami et al. (2005) associated the bolometric luminosity (L_X = 0.11 · 10⁴³ erg s⁻¹) of the X-ray source 173 with the thermal emission of the intra-cluster medium (ICM). From this value they deduced a galaxy velocity dispersion around 200 − 300 km s⁻¹, which is apparently in contrast with the σ_v estimated from the spectroscopic redshifts. We also associate with the galaxy ID_GOODS-MUSIC = 9792 the object 236 detected in the VLA 1.4 GHz survey; it has an integrated emission of 517.5 ± 13.1 µJy (Miller et al. 2008).
From this analysis we conclude that our two independent mass estimates (M_200 and M_vir) are consistent with this structure being a virialised poor cluster. However, its X-ray emission is significantly lower than expected from its optical properties, as shown by the comparison in Fig. 3 with the M_200-L_X relations found by Reiprich & Böhringer (2002) and by Rykoff et al. (2008).
Structures at z ∼ 1
At redshift ∼ 1 we find four structures (ID = 5, 6, 7 and 8).

Fig. 3. The horizontal error bars are calculated considering a bias factor in the range 1 ≤ b ≤ 2, while the vertical error bars are computed varying the gas temperature between T = 1 keV and T = 3 keV, as discussed in the main body. The clusters at z ∼ 0.7 and z ∼ 1.6 are indicated by red points and error bars. The M_200 − L_X relations found by Reiprich & Böhringer (2002) and by Rykoff et al. (2008) are indicated by a black and a green line, respectively.
The structure with ID=5 at z ∼ 0.96 has 32 member galaxies. This structure can be associated with the extended X-ray source number 183 in the catalogue by Luo et al. (2008), derived from the 2 Ms Chandra observation. This extended X-ray source had not been associated with any structure so far. From the count rate in the interval 0.3-4 keV (S/N = 11.3) we estimate a luminosity L_X = 0.86 − 2.36 · 10⁴³ erg s⁻¹ (in the interval 0.1-2.4 keV). For the structures with ID=6, 7 we estimate r_200 ∼ 1.2 − 1.8 Mpc, and a total mass of M_200 = 0.4 − 1.1 · 10¹⁴ M_⊙. The 3σ upper limits for their X-ray luminosities are all slightly below 10⁴³ erg s⁻¹, consistent with their M_200 masses.
The structure with ID=8 at z ∼ 1.06 has 38 associated galaxies and a spectroscopically confirmed AGN. We derive a precise redshift location of z = 1.0974 ± 0.0015 and a velocity dispersion of 446 ± 143 km s⁻¹ from 6 galaxies with spectroscopic redshifts. From these galaxies we also obtain M_vir = 0.8 · 10¹⁴ M_⊙ and r_vir = 0.8 Mpc. We estimate r_200 = 1.1 − 1.3 Mpc and M_200 = 0.2 − 0.5 · 10¹⁴ M_⊙, values compatible with a group of such M_vir and r_vir. This structure was already found with different methods by Adami et al. (2005), using a friends-of-friends algorithm on spectroscopic data from the VIMOS VLT survey (structure 15 in their Table 4), and by Díaz-Sánchez et al. (2007), studying the extremely red objects in GOODS-South (they call this structure GCL J0332.2-2752). Their redshift positions and velocity dispersions are consistent with those obtained in the present analysis. The 3σ upper limit for the X-ray luminosity is around 10⁴³ erg s⁻¹, consistent with the estimated M_200 mass.
Considering their properties, these four structures can be classified as groups of galaxies. Consistent results for the structure with ID=6 were obtained in Trevese et al. (2007).
Structures at high z
At redshift z ∼ 1.6, we find a compact structure that corresponds to a forming cluster, as already discussed in detail by C07 (see also Kurk et al. 2008). We find a regular mass profile for this structure, and we estimate r_200 = 2.1 − 2.9 Mpc and M_200 = 2.0 − 4.9 · 10¹⁴ M_⊙. This structure has 50 members, including 3 spectroscopic redshifts and a confirmed AGN, from the GOODS-MUSIC catalogue. We add three other spectroscopic redshifts from the GMASS sample. From these 6 redshifts we estimated a velocity dispersion of 482 ± 217 km/s, and derived M_vir = 1.4 · 10¹⁴ M_⊙ and r_vir = 1.46 Mpc. This estimate is consistent with the value in Table 4. We derive an upper limit to the X-ray luminosity of 0.83 − 3.67 · 10⁴³ erg s⁻¹ (0.1-2.4 keV), lower than expected from the velocity dispersion and the estimated M_200 (see Fig. 3).
At z ∼ 2.2 we find a diffuse overdensity, similar to those at lower redshift, embedding three structures, to which we associate 20, 23 and 19 galaxies. For all these structures we estimate r_200 ∼ 1.3 − 2 Mpc and a mass of M_200 ∼ 0.6 − 1.6 · 10¹⁴ M_⊙. These structures appear to be comparable to those at z ∼ 0.7 and z ∼ 1.6, and they could be forming clusters.
Colour-Magnitude diagrams
We study the colour magnitude diagrams (U − B vs. M_B) for all the structures, as shown in Figs. 4 and 5. To estimate the slope of the red sequence, we define its members as passively evolving galaxies according to the physical criterion age/τ ≥ 4, where the age and τ (the star formation e-folding time) are inferred for each galaxy from the SED fitting (Sect. 2). This quantity is, in practice, the inverse of the Scalo parameter (Scalo 1986), and a ratio of 4 is chosen to distinguish galaxies having prevalently evolved stellar populations from galaxies with recent episodes of star formation. Indeed, age/τ = 4 corresponds to a residual 2% of the initial SFR, for the exponential star formation histories adopted in this paper. Grazian et al. (2006b) showed that this value can be used to effectively separate star forming galaxies from the passively evolving population (see Grazian et al. 2006b, also for a discussion of the uncertainty associated with this parameter). Passively evolving galaxies are indicated in the figures as filled squares. Fig. 4 shows the colour magnitude diagrams for the four structures between z = 0.66 and z = 0.71. The cluster at z ∼ 0.71 (Panel d) shows a well defined red sequence, while the three structures at z ∼ 0.66 have fewer passively evolving galaxies. Therefore, in order to increase our statistics, we estimate the colour-magnitude slope by combining all four structures in the interval 0.66 < z < 0.71 (see Panel a in Fig. 5). We obtain a value of −0.023 ± 0.006 for the slope. The resulting colour-magnitude relation is plotted in all panels of Fig. 4 and in panel a of Fig. 5 as a continuous line. The dotted lines show the 1σ error obtained with a jackknife analysis. It is possible to see in Fig. 4 that this average colour magnitude relation is roughly consistent with the position in the (U-B) vs. B diagram of the galaxies belonging to each single structure. We therefore apply the same method at higher redshift, i.e. we estimate the slope of the red sequence by combining the different structures at the same redshift.

Fig. 4. Colour magnitude diagrams for each structure at z ∼ 0.7. Squares indicate passively evolving galaxies selected as age/τ ≥ 4, and circles are galaxies with age/τ < 4. Filled points indicate galaxies with spectroscopic redshift. The continuous lines are the fit to the red sequence of all the combined structures. The dotted lines are the 1σ uncertainties obtained with a jackknife analysis.

Fig. 5, panel b, shows the colour magnitude diagram for the structures at z ∼ 1. We find a slope of −0.03 ± 0.01.
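A minimal sketch of the red-sequence slope fit with a jackknife error (Python; the selection criterion follows the text, array names are assumed):

```python
import numpy as np

def red_sequence_slope(M_B, UB, age_tau):
    """Least-squares slope of (U-B) vs M_B for passive members
    (age/tau >= 4), with a jackknife error estimate."""
    sel = age_tau >= 4
    x, y = M_B[sel], UB[sel]
    slope = np.polyfit(x, y, 1)[0]
    jk = np.array([np.polyfit(np.delete(x, i), np.delete(y, i), 1)[0]
                   for i in range(x.size)])
    err = np.sqrt((x.size - 1) / x.size * np.sum((jk - jk.mean()) ** 2))
    return slope, err
```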
Panel c in Fig. 5 shows the colour magnitude diagram for the structure at z ∼ 1.6. In this case the galaxies are distributed over less than one magnitude, a range insufficient to estimate the slope of the "red sequence". However, if we overplot the two sequences obtained at lower redshift, we can see that the few passively evolving galaxies are consistent with them.
Finally, at redshift ∼ 2, we have only 4 passive objects from the combination of the 3 structures, and there is no evidence of a well defined red sequence. We note that the colours of these objects are generally bluer than the relations found at lower redshifts.
The values of the slopes for the structures at redshift ∼ 0.7 and ∼ 1 are consistent with previous determinations (e.g. Blakeslee et al. 2003; Homeier et al. 2006; Trevese et al. 2007). We confirm that the observations indicate no evolution up to redshift ∼ 1. This would imply that the mass-metallicity relation that produces the red sequence (Kodama et al. 1998) remains practically constant up to, at least, z ∼ 1.
Galaxy properties as a function of the environment
To each object in the sample we associate the comoving density at its position, and we study galaxy properties as a continuous function of the environmental density.
Galaxy populations: bimodality
We study the variation of the fraction of red and blue galaxies as a function of the environmental density. To separate red and blue galaxies we use the minimum in the bimodal galaxy distribution in the (U-V) vs. B colour magnitude diagram, derived by Salimbeni et al. (2008). Fig. 6 shows the fraction of red and blue galaxies for different rest frame B magnitudes in four redshift intervals. In general, for every environment, we find that, at fixed luminosity, the red fraction increases with decreasing redshift, and, at fixed redshift, it increases with increasing B luminosity. We also find that for z < 1.2 the red fraction increases with density at every luminosity, while this effect is absent at higher redshift. Our results extend to higher redshift those obtained by Cucciati et al. (2006) on the VVDS survey, with a shallower spectroscopic sample that reaches z ∼ 1.5. We find that at z > 1.2 even the highest luminosity galaxies are blue, star forming objects, similarly to the results in Cucciati et al. (2006), although our colour selection is slightly different, since we select in colour two complementary samples, while they select two extreme red and blue populations ((u* − g′) ≥ 1.1 and (u* − g′) ≤ 0.55). Our results are also in agreement with the analysis of the DEEP2 survey by Cooper et al. (2007) in the redshift range 0.4 < z < 1.35. They found a weak correlation between red fraction and density at z ∼ 1.2. We see that at z > 1.2 this correlation disappears, indicating that the change probably occurs in the critical range 1.5 < z < 2.0, at least in the environments probed by our sample. However, we note that, given the relatively small area covered, we do not probe very high density regions (i.e. rich clusters), at variance with wide, low redshift surveys. When rich clusters are considered (e.g. Balogh et al. 2004), a stronger variation with environment in the colours of faint galaxies is seen. In any case, the disappearance at z > 1.2 of the variation of the red fraction in the density range probed by our sample is an indication that a relevant change in galaxy properties takes place at z ∼ 1.5 − 2.

Fig. 6. Fraction of red (filled circles) and blue galaxies (filled triangles) at decreasing rest frame B magnitudes (from top to bottom) in four contiguous intervals of increasing redshift (from left to right). Vertical errorbars indicate the Poissonian uncertainty in each bin. The shaded areas are obtained by smoothing the red (blue) fraction with an adaptive sliding box. The horizontal errorbars indicate the range of density covered by 5-95% of the total sample.
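A minimal sketch (Python; names assumed, bins assumed populated) of the red-fraction computation with Poissonian errors:

```python
import numpy as np

def red_fraction(log_density, is_red, bin_edges):
    """Red galaxy fraction in environmental-density bins."""
    idx = np.digitize(log_density, bin_edges)
    frac, err = [], []
    for b in range(1, len(bin_edges)):
        n_tot = np.sum(idx == b)
        n_red = np.sum(is_red & (idx == b))
        frac.append(n_red / n_tot)
        err.append(np.sqrt(n_red) / n_tot)   # simple Poisson error estimate
    return np.array(frac), np.array(err)
```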
Galaxy physical properties in high and low density environments
We then study the distribution of physical parameters and photometric properties for galaxies in high density environments, and compare it to field galaxies. The first sample is defined as the combination of the data from structures with similar redshifts ('group galaxies' hereafter). The field galaxies are defined as those with an associated ρ lower than the median density (0.0126 for z < 1.8 and 0.0085 for z > 1.8) of the entire sample ('field galaxies' hereafter). We quantify the differences in the distributions of the galaxy physical properties, i.e. mass, age and star formation rate, through the probability P_KS of the two samples, obtained as described above, using a Kolmogorov-Smirnov test. We reject the hypothesis that the two samples are drawn from the same distribution if P_KS < 5 · 10⁻². Fig. 7 shows the distribution of the galaxy total stellar mass in high and low density regions, in the same four contiguous redshift intervals used before. The galaxies in high density environments have a distribution that generally peaks at higher masses with respect to "field" galaxies. For the mass distribution we find a significant difference in all but the last redshift bin, as shown by the P_KS values. It is important to remark here that the shape of the distributions at low masses could depend on the luminosity selection. In fact, a magnitude-limited sample does not have a well defined limit in stellar mass. This effect depends on the range of M/L ratios spanned by galaxies with different colours; e.g., as shown in Fontana et al. (2006), in our sample at z ∼ 1 M/L_K extends from 0.9, for redder objects, to 0.046, for bluer objects. If a colour segregation is present as a function of the environment, it could bias the distribution, favouring the observation of lower mass galaxies in less dense regions, where the fraction of blue galaxies is higher. Although, as shown in Fig. 6, we do not find a strong colour segregation, especially at z > 1, we also carry out a more conservative analysis. We consider only the range of masses above the completeness mass limit obtained from the maximal M/L_z850 for a passively evolving system (log(M) > 9.0 at z ∼ 0.6, log(M) > 9.6 at z ∼ 1, log(M) > 10.5 at z ∼ 1.6 and log(M) > 11.1 at z ∼ 2.15). Considering galaxies above these mass limits, we find that the masses of "group" galaxies are still higher than those of "field" galaxies in the lowest redshift bin (P_KS = 9.7 · 10⁻⁴). At z > 1.2, however, it is not possible to reach a conclusive result, due to the low statistics caused by this mass cut.

Fig. 7. Galaxy stellar mass distribution in four redshift intervals. Shaded red histograms represent galaxies associated with the density peaks and empty black histograms represent galaxies in the low density regions, as described in the text. In each panel the average values of log(M) for the two distributions are indicated by arrows of the same colour. The K-S probability is reported in each panel. Vertical lines indicate the mass limit at the median redshift of the bin (log(M) = 9.0 at z ∼ 0.6, log(M) = 9.6 at z ∼ 1, log(M) = 10.5 at z ∼ 1.6 and log(M) = 11.1 at z ∼ 2.15).
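This comparison can be sketched with the two-sample Kolmogorov-Smirnov test in scipy (function and variable names are ours):

```python
from scipy.stats import ks_2samp

def compare_samples(group_vals, field_vals, completeness_limit=None):
    """Two-sample K-S comparison; the null hypothesis (same parent
    distribution) is rejected when P_KS < 5e-2."""
    if completeness_limit is not None:       # conservative completeness cut
        group_vals = group_vals[group_vals > completeness_limit]
        field_vals = field_vals[field_vals > completeness_limit]
    _, p_ks = ks_2samp(group_vals, field_vals)
    return p_ks, p_ks < 5e-2
```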
Analogous results are found from the analysis of the luminosity distributions of "field" and "group" galaxies. In particular, we find that galaxies in "groups" have on average brighter rest-frame M_I magnitudes at all redshifts. The results are similar for the other rest frame bands, implying that galaxies in high density environments have, on average, greater bolometric luminosities with respect to field galaxies.
Finally, we study the age and SFR distributions for "group" and "field" galaxies (see Fig. 8). Only at low redshift does there appear to be a significant difference (P_KS = 3.0 · 10⁻² for age and P_KS = 6.7 · 10⁻³ for SFR; see Fig. 8). The two age distributions show a similar shape for young galaxies, but "group" galaxies have a higher fraction of old galaxies. As also shown by the difference in the average ages of the two samples, the "group" galaxies are older than the field ones. At higher redshifts the two distributions do not show significant differences. Indeed, at higher redshifts, any possible difference in the age of the two galaxy populations is probably smaller than the uncertainty on the ages. Analogously, star forming galaxies have a similar distribution in the "group" and "field" samples, but the "group" sample has a higher fraction of galaxies with low star formation, as is also shown by the different values of the average SFR.

Fig. 8. Top: As in Fig. 7 but for the ages of galaxies. The average values of the age for the two distributions are indicated by arrows. Bottom: As in Fig. 7 but for the SFR of galaxies. The average values of log(SFR) for the two distributions are indicated by arrows.
Summary and Conclusions
We applied a (2+1)D algorithm to the GOODS-MUSIC catalogue to identify structures in this area. This algorithm combines galaxy angular positions and precise photometric redshifts to give an adaptive estimate of the 3D density field, effective also at z > 1 and over a wide area. In this way we obtained a density map from redshift 0.4 up to 2.5, and we isolated the higher density regions. To identify density peaks we chose conservative selection criteria (at least five galaxies in connected regions of ρ > ρ̄ + 4σ) in order to maximise the purity of our sample.
We built mock catalogues simulating the GOODS-South field. Applying our density thresholds and selection criteria to these catalogues, we found a purity near 100% (with less than 15-20% of the structures lost) up to redshift 1.8, and ∼ 75-80% at higher redshift. In the higher redshift range, the criterion is very conservative, to keep a low number of false detections, and therefore the completeness is low (< 40%). From the simulations we also evaluated the ability of the algorithm to separate real structures that are very close both in redshift and in angular position. Both at low and high redshift it is not possible to separate structures whose centres are closer than 1.0 Mpc on the plane of the sky and 2σ_z in redshift. For larger separations it is possible to distinguish the groups, but only using higher thresholds (5 or 6σ above the average).
We found large scale overdensities at different redshifts (∼ 0.7, ∼ 1, ∼ 1.6 and ∼ 2.2), which are well traced by the AGN distribution, suggesting that the environment on large scales (∼ 10 Mpc) has an influence on AGN evolution (Silverman et al. 2008). We isolated several groups and small clusters embedded in these large scale structures. Most of the structures at z ∼ 0.7 and ∼ 1 have the properties of galaxy groups: their masses are of the order of M_200 = 0.2 − 0.8 · 10¹⁴ M_⊙, and their X-ray luminosities are slightly below 10⁴³ erg s⁻¹, consistent with the expectations of the M_200-L_X relations. The structure at z = 0.71 and those at z > 1.6 seem to be more massive; in particular, the structures with ID=4 and 9 can be classified as poor clusters. It is interesting to note that both these structures are significantly X-ray underluminous, as is evident from a comparison with the M_200-L_X relations found by Reiprich & Böhringer (2002) and by Rykoff et al. (2008) (Fig. 3). This is not surprising, since several authors have observed that optically selected structures have an X-ray emission lower than expected from observations of X-ray selected groups and clusters: this effect has been seen at low redshift both in small groups (Rasmussen et al. 2006) and in Abell clusters (Popesso et al. 2007), and in clusters at 0.6 < z < 1.1 (Lubin et al. 2004). These results may be explained by such optically selected structures being still in the process of formation, or being the result of the alignment of two substructures along the line of sight, although it cannot be excluded that they contain less intracluster gas than expected, because of strong galactic feedback (Rasmussen et al. 2006). If these structures are virialised, as is probable in the case of the massive structure at z = 0.71 (ID=4), this may indeed indicate that they contain less intracluster gas than expected. It is worth investigating this issue in future deep surveys, since it would have interesting implications for the evolution of the baryonic content of these structures.
We then studied the colour magnitude diagrams (U − B vs. M_B) for all the structures. We defined the members of the red sequence according to the physical criterion age/τ ≥ 4, which selects passively evolving galaxies with little residual star formation. We confirmed no evolution of the red sequence slope up to redshift ∼ 1. This implies that the mass-metallicity relation that produces the slope of the red sequence remains constant up to z ∼ 1.
We then studied the variation of the fraction of red and blue galaxies as a function of the environmental density. We found that, at fixed redshift, the red fraction increases with increasing B luminosity, while, at fixed luminosity, it increases with decreasing redshift. We also found that the increase of the red fraction with growing density disappears at z > 1.2.
We also studied galaxy properties in different environments. We found that galaxies in high density environments have higher masses with respect to "field" galaxies, in qualitative agreement with a downsizing scenario. The mass distributions show a significant difference in all but the last redshift bin.
Similarly, the galaxies in groups have on average brighter rest-frame magnitudes, and there is a greater number of bright galaxies in groups at all redshifts compared to field galaxies. Finally, the age and SFR distributions for the two subsamples appear different only at low redshift, where "group" galaxies are generally older and less star forming than "field" ones.
From the analysis of the environmental dependence of galaxy colours and mass as a function of redshift, and from the absence of any well defined red sequence at high redshift, we can argue that a critical period in which some basic characteristics of galaxy populations are established is that between z ∼ 1.5 and z ∼ 2.
Comparative analyses on medium optimization using one-factor-at-a-time, response surface methodology, and artificial neural network for lysine–methionine biosynthesis by Pediococcus pentosaceus RF-1
ABSTRACT An optimization strategy encompassing one-factor-at-a-time (OFAT), response surface methodology (RSM), and artificial neural network (ANN) methods was implemented during medium formulation, with the specific aim of lysine-methionine biosynthesis employing a newly isolated strain of Pediococcus pentosaceus RF-1. The OFAT technique was used in the preliminary screening of factors (molasses, nitrogen sources, fish meal, glutamic acid and initial medium pH) before proceeding to the optimization study. Implementation of a central composite design of experiment subsequently generated 30 experimental runs based on four factors (molasses, fish meal, glutamic acid, and initial medium pH). From the RSM analysis, a quadratic polynomial model could be used to describe the relationship between the various medium components and the responses. It also suggested that using molasses (9.86 g/L), fish meal (10.06 g/L), glutamic acid (0.91 g/L), and initial medium pH (5.30) would enhance the biosynthesis of lysine (15.77 g/L) and methionine (4.21 g/L). Alternatively, a three-layer neural network with a 4-5-2 topology predicted a further improvement in the biosynthesis of lysine (16.52 g/L) and methionine (4.53 g/L) using a formulation composed of molasses (10.02 g/L), fish meal (18.00 g/L), and glutamic acid (1.17 g/L) with an initial medium pH of 4.26.
Introduction
Amino acids are widely applied in the food, pharmaceutical, medicine, and chemical industries [1]. They are most commonly utilized as nutritional supplements or as additives in animal feeds. Amino acids are very important elements in many metabolic activities. As such, their bioprocessing plays a major role in improving the efficacy of animal protein production, contributing to the increase in protein supply [2][3][4]. Dietary provision for protein inevitably relates to the requirement for amino acids, since they are the building blocks of protein, while some are the products of protein hydrolysis as well [5][6][7].
For an industrial microbial cultivation process, medium composition plays a critical role, owing to its major influence on the formation, concentration, and yield of a particular culture's end product [8,9]. As such, optimization of the medium is usually a major concern when one is tasked with maximizing profit [10,11]. For each bioproduct, the process facility and suitable strategies have to be elaborated through a comprehensive and detailed process characterization, which in turn will determine the most relevant process parameters influencing the final yield or productivity, affecting the overall process economics [11].
The traditional one-factor-at-a-time (OFAT) approach for an optimization exercise can be time-consuming. Nonetheless, it can serve the purpose of a coarse estimation of the optimum levels [12][13][14]. On the other hand, a statistical method such as response surface methodology (RSM) enables researchers to design the experiments and evaluate the interactions among factors and responses throughout the study. More studies in recent times have used this approach, which combines experimental design, regression modeling techniques, and optimization tools to predict the maximum yield of bioproducts of interest [15][16][17]. Artificial neural network (ANN) is another means of analysing experimental outputs that is fundamentally different from RSM. Nowadays, the use of ANN in the field of predictive microbiology has inspired several studies [8,18,19]. The attractiveness of ANNs as empirical modelling schemes lies in their ability to extract, with high accuracy and irrespective of the degree of nonlinearity existing between system variables, the intrinsic relationships between independent and dependent variables through training of the network on a set of examples representing the phenomenon to be modelled [20]. In other words, an ANN is a highly simplified model mimicking the structure of a biological network: a set of biological neurons receives inputs, combines them, performs a nonlinear operation on the result, and then outputs the final result [21,22].
Numerous studies have been reported in the literature whereby models were derived based on RSM and ANN analyses of datasets obtained from the same experimental design; the ANN and RSM models were then compared for their predictive capacity. Several researchers have reported combined ANN and RSM model development in various bioprocessing optimization studies [15,16,[23][24][25]. While the use of ANN or RSM optimization methods for lactic acid bacteria (LAB) cultivation and protein biosynthesis has been reported [19,25,26], this is the first such report on lysine-methionine biosynthesis by Pediococcus pentosaceus using these two approaches. The main objective of this study was to optimize the medium formulation and initial medium pH for lysine-methionine biosynthesis by P. pentosaceus RF-1 according to the traditional OFAT approach and the statistical approaches of RSM and ANN.
Materials and methods
Bacterial strain and maintenance

P. pentosaceus RF-1, a facultatively anaerobic strain, was used and maintained in 5% (v/v) glycerol at −80 °C. The strain was routinely grown in de Man Rogosa Sharpe (MRS) medium. P. pentosaceus RF-1 was locally isolated from fermented milk and has been characterized by full-length 16S rRNA gene sequencing. Phylogenetic analysis revealed that the strain is closely related to P. pentosaceus ATCC 25745, with 99% similarity. The strain was deposited in the Microbial Culture Collection Unit (UNICC), Institute of Bioscience, Universiti Putra Malaysia, under the accession number UPMC1087 [27].
Molasses was pre-treated by dilution with distilled water containing 2% (v/v) sodium dihydrogen phosphate in a ratio of 1:1 and autoclaved [28]. The cultivation medium was set to pH 7 with the addition of 1 M NaOH or 1 M HCl before sterilization at 121 °C for 20 min. After sterilization, the medium was left to cool at room temperature and then supplemented with CaCO3, molasses, and 10% (v/v) of inoculum. The inocula were prepared by inoculating a colony of the strain grown on an MRS agar plate into 5 mL of MRS broth in a 100-mL test tube with continuous shaking (100 rpm) on a rotary incubator shaker at 37 °C for 12 h. The submerged batch cultivations for the growth of P. pentosaceus RF-1 were carried out using 250-mL shake flasks filled with 150 mL of medium for 18 h. Samples of about 3 mL were withdrawn at time intervals during the cultivation for the analyses of cell concentration, glucose consumption, and lysine-methionine concentration.
Design of experiment (DOE)
One-factor-at-a-time

The experimental factors and their associated levels for medium formulation and pH adjustment via OFAT experiments are shown in Table 1. All experiments were performed in triplicate, and the results are reported as the mean of these replicates.
Central composite design (CCD)
Four variables and five levels were used in this study. The four variables were molasses, fish meal, glutamic acid, and initial medium pH (Table 2).
Response surface methodology (RSM) modelling
The results from the CCD were then statistically evaluated with the Design Expert 6.0.6 software (Stat-Ease Inc.).

Table 1. Parameters and variables used in the OFAT approach.

Parameter         Variables and concentrations
Molasses          1, 3, 5, 10, and 12 g/L
Nitrogen source   Yeast extract, peptone, palm kernel cake, and fish meal
Fish meal         1, 3, 5, 10, 15, and 20 g/L
Glutamic acid     0.1, 0.3, 0.5, 1.0, and 5.0 g/L
Initial pH        pH 5, 6, 7, and 8

Independent variables were allocated a high level (+1) and a low level (−1). An axial distance (±α) of 1.6 was chosen to make the design rotatable. The central point was denoted as (0) and maintained at a constant value, as it provides an unbiased estimate of the process error variance. The centre point was set as the mid-point value, and there were six centre points for these particular experiments. The best suited regression model in terms of the two responses of interest, i.e. lysine and methionine concentration, was found to be the quadratic model resembling a second-order polynomial as per the following equation:

Y = b0 + Σj bj xj + Σj bjj xj² + Σj Σk bjk xj xk (1)

where Y is the lysine or methionine concentration, j and k are the index numbers of the factors, xj are the coded variables, and bj, bjj, and bjk are the linear, quadratic, and interactive coefficients, respectively. The model F-value was required to be significant, the lack of fit (LOF) to be non-significant, and the regression to produce a good multiple correlation coefficient (R²).
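As an illustration, the second-order model of Eq. (1) can be fitted by ordinary least squares; the sketch below (Python, with hypothetical array names standing in for the Design Expert analysis) uses scikit-learn's polynomial expansion.

```python
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

def fit_rsm(X, y):
    """Fit Y = b0 + sum b_j x_j + sum b_jj x_j^2 + sum b_jk x_j x_k.

    X : (30, 4) coded CCD settings (molasses, fish meal,
        glutamic acid, initial pH); y : (30,) lysine or methionine titres.
    """
    quad = PolynomialFeatures(degree=2)       # linear, square, interaction terms
    Xq = quad.fit_transform(X)                # first column is the bias term b0
    model = LinearRegression(fit_intercept=False).fit(Xq, y)
    return model, model.score(Xq, y)          # fitted model and R^2
```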
Artificial neural network (ANN) modelling
The same set of CCD experimental data that had been used for the RSM design was also employed in developing the ANN. A versatile ANN software package (Neural Power, Ver. 2.5, CPC-X Software, USA) was chosen to recognize the relevant pattern or regression from the tabulated data. The data were further divided into two sets, serving training (30 data) and testing (4 data) purposes. In Table 3, the bold numbers represent the randomly selected testing set. Every network developed had four input variables and two output responses, and each underwent training for the computation of the network parameters. The performance of the network was monitored concurrently with the testing set during training to avoid overtraining [16,26]. Training a neural network model means selecting one model from the set of allowed models that minimizes the cost criterion. To supervise the training, the designed networks were trained to the point of exhibiting a root-mean-square error (RMSE, Equation (2)) as close as possible to 0.01, and a network correlation coefficient (R, Equation (3)) and determination coefficient (DC, Equation (4)) closest or equal to 1:

RMSE = [ (1/N) Σi (x_p,i − x_obs,i)² ]^(1/2) (2)

R = Σi (x_obs,i − x_m)(x_p,i − x_pm) / [ Σi (x_obs,i − x_m)² Σi (x_p,i − x_pm)² ]^(1/2) (3)

DC = 1 − Σi (x_p,i − x_obs,i)² / Σi (x_pm − x_obs,i)² (4)

where N is the number of experiments, x_obs is the observed value, x_p is the predicted value obtained from the ANN, x_m is the average of the actual values, and x_pm is the average of the predicted values. A multilayer full feed-forward neural network was used to model lysine-methionine biosynthesis. The software allows 5-30 neurons to be selected per layer when designing a network, with an increment of one neuron at a time. In this experiment, the search for the best topology was restricted to a network containing a single hidden layer. The optimal number of neurons in the hidden layer, as well as the transfer functions for the hidden and output layers (sigmoid, hyperbolic tangent, Gaussian, linear, threshold linear, and bipolar linear), were determined manually and iteratively, based on the ability of the network to provide the most accurate prediction of the testing set and the least minimum of the cost function. Different learning algorithms were used to train the networks, the common default being the back-propagation learning algorithm. During training, a set of inputs is presented to a network of randomly pre-assigned weights. Each neuron in the hidden and output layers first calculates the weighted sum of its inputs and passes the result through a transfer function to produce an estimate of the output corresponding to the input dataset. The result is then compared with the corresponding desired values, and the error is back-propagated through the network to adjust the connection weights according to the learning instruction. This practice is reiterated until the predetermined target RMSE is reached [8].
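A minimal sketch of a comparable 4-5-2 network using scikit-learn's MLPRegressor in place of the Neural Power package (a stand-in, not the original tool; array names are assumed):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def train_ann(X_train, Y_train, X_test, Y_test):
    """4-5-2 topology: 4 inputs, one hidden layer of 5 sigmoid neurons,
    2 outputs (lysine and methionine titres)."""
    net = MLPRegressor(hidden_layer_sizes=(5,), activation='logistic',
                       solver='lbfgs', max_iter=5000, random_state=1)
    net.fit(X_train, Y_train)
    pred = net.predict(X_test)
    rmse = np.sqrt(np.mean((pred - Y_test) ** 2))   # Eq. (2) on the test set
    return net, rmse
```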
Analytical methods
Cell and glucose concentrations
A 1 mL sample was centrifuged (10,000 × g, 10 min, 4 °C) to separate the cell pellet from the supernatant. The supernatants were collected for glucose determination [29], with absorbance measured at 540 nm, while the cell pellets were used for cell concentration determination [12].
Amino acid concentration
Amino acid profiles were first determined by high-performance liquid chromatography (HPLC) to confirm the presence of the target amino acids in the medium. For routine detection, quantitative analysis by acid ninhydrin methods [1,30] was used.
Results and discussion
One-factor-at-a-time (OFAT)
Preliminary optimization of the medium formulation (molasses, nitrogen sources, and glutamic acid) and the initial medium pH was conducted in shake-flask experiments (Figure 1), which demonstrated that biosynthesis of lysine and methionine by P. pentosaceus RF-1 is strain-dependent in the culture medium.
In order to achieve efficient and optimal production, various studies of the medium composition have been dedicated to the selection of carbon and nitrogen sources. Molasses, often regarded as a waste product of sugar factories, was chosen as a candidate for the main carbon substrate in the fermentation media. The effect of molasses concentration was examined from 1 to 12 g/L (Figure 1(A)). The best concentration was 5 g/L, at which P. pentosaceus RF-1 produced lysine and methionine at 6.68 and 3.29 g/L, respectively. Inhibition of cell growth and amino acid biosynthesis was observed when more than 10 g/L molasses was used in the formulation.
Nitrogen is used for both functional and structural purposes by different microorganisms, and the form of nitrogen has a profound effect on microbial metabolism [12]. Figure 1(B) evaluates the best candidate nitrogen source among yeast extract, peptone, palm kernel cake (PKC), and fish meal. The maximum cell concentration of P. pentosaceus RF-1 was observed in the medium adopting PKC, owing to its high content of nitrogenous compounds and protein. Nonetheless, fish meal proved superior for the biosynthesis of lysine (6.67 g/L) and methionine (3.13 g/L). The concentration of fish meal was then varied from 1 to 20 g/L in the following experiments (Figure 1(C)); the highest biosynthesis of lysine (6.84 g/L) and methionine (3.01 g/L) was detected with fish meal at 5 g/L. It is worth noting that molasses coupled with fish meal in the cultivation medium is particularly effective in supporting microbial growth, which can be further exploited to produce various metabolites, biopolymers, and enzymes [31,32].
The performance of protein-rich fish meal as the sole nitrogen source depends upon the available level of the limiting amino acid. In this study, the glutamic acid addition was varied from 0.1 to 5 g/L. Figure 1(D) shows that the best glutamic acid concentration for lysine biosynthesis (6.86 g/L) was 0.3 g/L, whereas it was slightly higher, at 0.5 g/L, for methionine biosynthesis (2.38 g/L). The addition of glutamic acid is essential because this substrate sits at the junction of energy and protein metabolism and is crucial for amino acid metabolism [33,34].
The optimal pH range for the growth of the genus Pediococcus is between 6.0 and 6.5 [35,36]. The effect of initial medium pH was examined from pH 4 to pH 8. The final result agrees with others [36]: pH 7 was found to be most suitable for amino acid secretion by the microorganism (Figure 1(E)).
RSM modelling
Findings from the preliminary OFAT screening were then applied to RSM modelling. The four factors to be optimized were molasses (A), fish meal (B), glutamic acid (C), and initial pH (D), which were assigned to a number of runs as determined through the CCD. A total of 30 experiments (Table 3) were conducted to evaluate their effect on the two responses (lysine and methionine synthesis). The CCD results indicate that both responses were optimal in experiment no. 16, with factors A, B, C, and D at 10 g/L, 15 g/L, 1 g/L, and pH 8, leading to the highest lysine and methionine concentrations of 14.13 and 4.96 g/L, respectively.
RSM simulation predicted that a quadratic model was best suited to describe the relationship between the factors and responses. Regression was performed to fit the response function to the experimental data, resulting in two full actual models (Equations (5) and (6), of the same second-order form as Equation (1)), where lysine and methionine represent the predicted responses and A, B, C, and D are the coded values of molasses, fish meal, glutamic acid, and initial medium pH, respectively. The statistical significance of all factors was described by analysis of variance (ANOVA) in Table 4. The R² coefficient, the correlation, and the model significance (F-value) were used to analyse the adequacy of the models, and the quality of fit of each equation was expressed by the DC, R². A good R² should be 80% or above [12]. The R² values obtained for lysine and methionine are 0.9016 and 0.9039, respectively, indicating that the models could explain about 90% of the variability attributable to the independent variables. Goodness of fit was further assessed by the adjusted R² (R²Adj), which accounts for the reduction in the error sum of squares caused by extraneous factor terms in a derived model equation [16]. In this study, R²Adj stands at 0.8097 (lysine) and 0.8143 (methionine), indicating good agreement between the observed and predicted values of the output responses.
The model significance (F-value) is a measure of the variation of the data around the mean. A probability value (P model > F) of less than 0.05 implies that each of these models is significant and can serve as a good predictor of the experimental results. The experimental results were also confirmed to be acceptable and in good agreement based on the coefficient of variation (CV), which is 14.69% for lysine and 11.67% for methionine. In Table 2, centre points with a coded value (0) were repeated six times to estimate the pure error for the LOF test. Models with a significant LOF term were not used for predictions, as an insignificant LOF is the most desirable (p > 0.1); both models produced LOF values deemed not significant (P model > F), at 0.1369 and 0.4927, respectively.
From the ANOVA analysis (Table 4), the four independent variables were found to have a significant effect on lysine and methionine production by P. pentosaceus RF-1. The p-value was used as a tool to determine the significance of each coefficient; every parameter was estimated, and the corresponding p-values for lysine and methionine are shown in Table 4. Positive coefficients for A, B, and C indicate a positive linear effect on the response. Table 4 shows the model terms with a p-value < 0.05 for lysine (A, B, A², and AC) and methionine (A, B, and A²). Therefore, the simplified quadratic model equations (Equations (7) and (8)) describing lysine and methionine biosynthesis are as follows:

\[ \text{Lys} = +9.74 + 2.41A + 0.57B - 1.60A^{2} + 0.95AC \]  (7)

\[ \text{Met} = +3.17 + 0.77A + 0.21B - 0.31A^{2} \]  (8)
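The simplified models can be evaluated directly. The small sketch below encodes Equations (7) and (8) as Python functions over the coded factor levels (A, B, C); the example usage point is an assumption for illustration, not part of the original analysis.

```python
# Equations (7) and (8) with the coefficients quoted in the text;
# a, b, c are the coded levels of molasses, fish meal, and glutamic acid.
def lysine(a, b, c):
    return 9.74 + 2.41 * a + 0.57 * b - 1.60 * a ** 2 + 0.95 * a * c

def methionine(a, b):
    return 3.17 + 0.77 * a + 0.21 * b - 0.31 * a ** 2

# Example: all three factors at their high coded level (+1).
print(lysine(1, 1, 1), methionine(1, 1))
```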
ANN modelling
Thirty experimental runs derived from the CCD were also analysed through the ANN; the observed values, the predicted values, and the absolute deviations of the amino acid biosynthesis predictions made by the best constructed neural network are shown in Table 5. About 120 network architectures were developed and tested for the prediction of lysine-methionine biosynthesis by P. pentosaceus RF-1. Following the training and testing procedures, Table 6 describes the effect of different normal feed-forward network architectures on the model residual error, showing three examples of top architectures that yielded notably higher accuracy than the rest. Nonetheless, only one network was selected as the best predictor, based on the error reduction criterion.
Training a neural network entails selecting a learning algorithm that can minimize the error or cost criterion. Table 6 shows that the ANN models of the experimental data-set with the lowest calculated residual error were trained using either batch back-propagation (BBP) or incremental back-propagation (IBP). IBP is most adequate for network training when both the training set and, more importantly, the testing set return prediction values exhibiting the lowest RMSE, with the network correlation coefficient (R) and DC closest to 1.0. The details of the learning algorithms have been reported elsewhere [15,18,37].
The best network design in terms of accuracy, assigned as set no. 1, has three layers with a 4-5-2 network topology (Figure 2). For the lysine output response, a sigmoidal function for the hidden layer combined with a linear function for the output layer produces RMSE, R, and DC values of 1.83, 0.98, and 0.85, respectively; for the methionine output, the same network topology registers RMSE, R, and DC values of 0.61, 0.99, and 0.85. This network consequently achieves a good compromise between bias and variance, and its model explanation could promote good generalization. It has been reported in the literature that one hidden layer is typically adequate to provide an accurate prediction and should be the first choice for any practical feed-forward network design [22]; hence, a single-hidden-layer network was used in this study. Figure 3 illustrates the level of importance (in percentages), or effectiveness, of each factor (medium constituent) when analysed with Neural Power. The highest level of significance was attributed to fish meal (37.79%), followed by molasses (34.16%), glutamic acid (15%), and initial medium pH (13.05%). Numerous investigators have looked for ways of producing microbial amino acids using inexpensive media. In this study, P. pentosaceus RF-1 is shown to tolerate fish meal in the formulated medium, which it requires for growth and amino acid biosynthesis. Fish meal provides not only a relatively large proportion of proteins and nucleic acids but also more growth factors compared to the other nitrogen sources used [2,32,38].
The second-ranked molasses contains a high concentration of C6 sugars to support the fermentation process and is also an enriched source of B vitamins [2,28,31]. However, this study revealed that the utilization of molasses is limited to not more than 12 g/L in the medium, owing to its inhibitory effect on the growth of P. pentosaceus RF-1. In fact, a very high concentration of molasses would visibly darken the medium, or it could undergo a complex Maillard reaction with fish meal that prevents P. pentosaceus RF-1 from growing effectively.
Glutamic acid does not function simply as an energy source, but also as a precursor for nucleotides, guanosine triphosphate, purine, and pyrimidine, hence providing an essential component for cell replication [33]. Most LAB depend on the addition of glutamic acid to the medium [34], and it has also been shown to promote LAB growth during amino acid biosynthesis [4,6]. Figure 4 shows the three-dimensional plots describing the interactions of the four factors on lysine biosynthesis as predicted by the best ANN network, which are quite similar to the RSM representation (data not shown). From Figure 4(A), too great an increase in molasses concentration eventually causes a reduction in lysine, whilst biosynthesis actually peaked in the mid-range of fish meal. On the other hand, Figure 4(B) shows that too low a pH would inhibit product secretion, but an increase in the pH range brings much less improvement in lysine biosynthesis than molasses does. Figure 4(C,D) interestingly shows that in some areas of the response curve, glutamic acid has an inverse relation with molasses and fish meal: a lower concentration of glutamic acid is actually preferred when the concentration of molasses or fish meal is increased to a very high proportion in the medium formulation. Figure 5 depicts the surface plots for methionine biosynthesis with the interactions between molasses and fish meal (Figure 5(A)), initial pH and molasses (Figure 5(B)), glutamic acid and molasses (Figure 5(C)), and glutamic acid and fish meal (Figure 5(D)) as modelled by the neural network. When the molasses concentration was fixed at 10 g/L, methionine biosynthesis increased, but as the glutamic acid and fish meal concentrations were raised beyond a certain level, methionine production started to regress. The interactions between initial medium pH and molasses (Figure 5(B)) and between glutamic acid and molasses (Figure 5(C)) show a similar trend.
Comparison between OFAT, RSM, and ANN
Table 7 shows the follow-up experimental results validating the new medium formulations suggested by the ANN and RSM simulations, compared to the optimal points from OFAT: molasses (5 g/L), fish meal (5 g/L), glutamic acid (0.3 g/L), and initial medium pH 7. Validation experiments were required to verify the suggested maximum achievable concentrations of lysine and methionine from the fermentation of P. pentosaceus RF-1. Based on the best-point optimization function in Design Expert, which possesses a desirability factor closest to 1.0 (the most desirable condition), the RSM simulation suggested setting molasses at 9.86 g/L, fish meal at 10.06 g/L, glutamic acid at 0.91 g/L, and initial medium pH at 5.3. As for ANN, the forecast of the best possible medium formulation was made by solving the previous best network model through the Rotation Inherit Optimization solver module of NeuralPower, which yielded a different recipe: molasses (10.02 g/L), fish meal (18 g/L), glutamic acid (1.17 g/L), and initial medium pH 4.26. All three conditions were tried on P. pentosaceus RF-1 subjected to 18-h batch cultivation. Figure 6 depicts the comparison between the OFAT, RSM, and ANN recipes in terms of P. pentosaceus RF-1 cell concentration, glucose consumption, and lysine-methionine biosynthesis. Observations on the three media formulated via OFAT, RSM, or ANN indicate that P. pentosaceus RF-1 initially experienced a lag phase in the first 2 h of cultivation. An exponential growth phase ensued until 12 h of cultivation, during which the culture gained high cell growth (X_max) before reaching the stationary phase. For the media using the ANN and RSM formulae, the measured maximum cell growth at 12 h of cultivation was approximately 1.24 ± 0.046 and 1.154 ± 0.019 g/L, respectively. Cell density was slightly higher in the OFAT-based formulation, with X_max reaching 1.31 ± 0.01 g/L. The maximum amount of lysine-methionine production was also observed during the exponential phase of cell growth, implying that the biosynthesis of lysine-methionine is a growth-associated process.
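As a crude, hypothetical stand-in for the Design Expert desirability search (the actual optimizer is not reproduced here), the sketch below grid-searches the coded factor space of the simplified Equations (7) and (8), maximizing the plain sum of the two predicted responses rather than a true desirability function.

```python
import numpy as np

# Brute-force grid search over coded A (molasses), B (fish meal), C (glutamic
# acid) in [-1.6, +1.6], scoring each point by the sum of the two simplified
# model predictions. A real desirability function would weight and normalize
# the responses instead of simply adding them.
grid = np.linspace(-1.6, 1.6, 33)
best = max(((9.74 + 2.41*a + 0.57*b - 1.60*a**2 + 0.95*a*c)
            + (3.17 + 0.77*a + 0.21*b - 0.31*a**2), a, b, c)
           for a in grid for b in grid for c in grid)
total, a, b, c = best
print(f"max Lys+Met = {total:.2f} g/L at coded A={a:.2f}, B={b:.2f}, C={c:.2f}")
```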
Generally, the statistical approaches of RSM and ANN are sequential strategies that enable us to design, analyse, and find the optimum levels and to assess the interrelated effects of the factors leading to higher growth of P. pentosaceus RF-1. Both approaches gave a similar picture of lysine-methionine biosynthesis by P. pentosaceus RF-1. The actual biosynthesis of lysine and methionine using the ANN-suggested recipe reached 16.52 ± 0.18 and 4.53 ± 0.03 g/L against predicted values of 14.45 and 4.34 g/L, respectively. The medium formulated through ANN thus slightly improved the biosynthesis of lysine-methionine by P. pentosaceus RF-1 compared to culture in the medium proposed by RSM (an increase of 4.8% for lysine and 7.6% for methionine).
Conclusion
The statistically based models provide good predictions for the independent variables regarding lysine-methionine production, with the superior ANN showing more precision among the predictions made. The RSM-derived formulation demonstrated maximum lysine and methionine biosynthesis at 15.77 ± 0.10 and 4.21 ± 0.08 g/L, respectively, and the values predicted by the RSM model were comparable with the experimental values obtained. Finally, as a measure of comparison, it is apparent from the data that improving the medium constituents through a more systematic statistical approach has sound merit, given that the ANN- and RSM-based formulations managed to increase the biosynthesis of lysine-methionine by as much as 100% against the OFAT method. | 2019-04-02T13:07:48.714Z | 2017-05-30T00:00:00.000 | {
"year": 2017,
"sha1": "c003884597cf39838d87dd02c0015c510b824b7a",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/13102818.2017.1335177?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "c9f61fa744edc9c59905ba5ebe75b5606746094f",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
253529583 | pes2o/s2orc | v3-fos-license | Quality and Health Risk Assessment of Groundwaters in the Protected Area of Tisa River Basin
This study was conducted in order to assess the chemistry (41 metalloids and heavy metals and 16 physico-chemical indicators) of groundwater sampled from the protected area of the Tisa River Basin during 2021. Pollution indices were used to determine the potential metal pollution level, and a non-carcinogenic risk assessment of the metals ingested through water was then performed. The results indicated general contamination with ammonium, chloride, iron, and manganese. The samples were rich in Cu, Mg, and Pb, although concentrations remained below the maximum limits. Significant correlations were noticed between Al-Fe, Mn-Fe, Mn-Ni, and Cr-Zn, as well as between the metal content and the pollution index scores. The metal pollution indices indicated three pollution levels (low, medium, and high) based on the metal content and the standards regarding water quality for drinking purposes. The pollution index scores ranged from 1.52 to 41.2. A human health risk assessment indicated no potential non-carcinogenic risk for the studied metals through the consumption of groundwater: the results of three different tools (chronic daily intake, hazard quotient, and hazard index) were below the critical value, except for aluminium in two samples. This study is one of the first attempts to evaluate the quality of groundwater sources, together with the associated human health risks of the studied metals, in the Tisa River Basin protected area. Based on this research, strategies for managing and controlling the risks can be developed.
Introduction
One of the most significant and valuable natural resources on Earth is water, especially water used as a drinking water source [1]. It is estimated that 1.8 billion people (28% of the world population) use untreated water, while 1.2 billion (18% of the world population) use water sources with high sanitary risks [2]. Terrestrial ecosystems depend on groundwater in different ways, whether seasonally or continually [3]. Groundwater, particularly in alluvial aquifers, is a significant source of drinking water and minerals, especially in developing countries with rural and semi-urban populations; therefore, quality assessment and continued, extensive monitoring are of serious concern. Water contains many minerals, nutrients, and dissolved substances. Unfortunately, all water sources, including groundwater, are contaminated and altered by toxic contaminants and elements entering the water systems through the hydrological cycle, which implies continuous degradation [1].
The quality of groundwater depends on the geological structure of the area. Natural pollution (including biological processes such as weathering, precipitation, ion exchange, and dissolution) and anthropogenic pollution (including industrial and agricultural activities) influence and alter the chemical composition [4,5]. The balance and functionality of groundwater sources depend on the physico-chemical and microbiological activities of the water system. Climatic factors (such as air humidity, precipitation, and temperature) are responsible for the groundwater supply [6], but their action is influenced by the soil, the vegetation, the hydro-physical characteristics of the geological formations, and surface leaks [7]. Hydrologic factors (including stagnant surface waters, shallow runoff on slopes, and total leakage from the hydrographic network) influence the supply and the groundwater regime; this influence is associated with the interactions between the balance elements of the drainage basin and the groundwater. Hydrogeochemical processes (such as cation exchange, mineral dissolution, groundwater mixing, transpiration, and evaporation) influence and control the characteristics of groundwater sources [8].
Heavy metals are considered significant pollutants due to their strong bioaccumulation (in tissues), biomagnification (through the food chain), toxicity, and persistence when they exceed the maximum allowable concentrations (MACs), causing diverse diseases such as liver crises, skin irritation, and kidney and cardiovascular conditions [9]. Epidermal absorption, inhalation, and food and water ingestion are the main routes of heavy metal accumulation in human and animal bodies [10]. Sources of heavy metals in water systems are natural processes, such as the weathering of mineral-rich rocks (also known as geographic heterogeneity), and anthropogenic factors, such as industrial wastes, sewage leachates, municipal waste disposal, and the inappropriate use of pesticides and fertilizers in agriculture [1,11,12].
Lately, the quality of groundwater, its pollution sources, and the associated health risks have been investigated worldwide. For example, in Nigeria, China, Romania, and India, the results indicated that insecure drinking water sources led to health issues, especially in infants, particularly due to anthropogenic actions [13-15]. The quality of groundwater and its risks can also be evaluated using diverse mathematical instruments, for example pollution indices, quality indices, or health risk indices.
In the present study, a series of chemical parameters were determined and assessed in groundwater samples collected from the protected site of the Tisa River Basin: the pH, the electrical conductivity, the oxidation-reduction potential, the temperature, the oxygen saturation level, the turbidity, the total hardness, the dissolved oxygen content, the total dissolved solids, the nutrient content, and the presence of heavy metals. All of these chemical indicators are significant for the global assessment of water quality. Water plays a crucial role in maintaining the balance of aquatic and terrestrial ecosystems in the Tisa River protected areas and, generally, in maintaining the functions of ecosystems as support for biodiversity. The anthropogenic pressures on the available natural resources need to be diminished or even stopped in order to ensure a balance between the conservation of biodiversity in both protected Tisa River sites and the needs of the inhabitants. This balance has to follow the principles of sustainable development, meeting the long-term needs of current generations without compromising those of future generations. Accordingly, comprehensive pollution indices were applied for the first time and analyzed in order to determine the potential pollution level of the waters, together with human health risk indices to evaluate the non-carcinogenic risks associated with the studied contaminants. The water typology of the samples was also analyzed using four different diagrams (Piper, Gibbs, Stiff, and Schoeller). The obtained results provide significant data and evidence regarding the groundwater from the protected sites in the Tisa River Basin (which is also a source of drinking water) and support better risk management and pollution prevention.
Study Area Location
The Natura 2000 ROSCI0251 Tisa Superioara site contains the alluvial plain and terraces on the left bank of the Tisa River (Figure 1), which is part of the upper course of the river and forms the border between Romania and Ukraine (cutting the Maramures Depression from east to west). In the Tisa meadow, numerous habitats have formed, including Piatra, Teceu Mic, Remeti, Sapanta, and Campulung de Tisa. The anthropization degree is high, while the anthropogenic pressure is moderate to critical and manifests in a variety of forms: localities, agricultural activities, animal husbandry, traffic, abandoned household and construction wastes, vegetation fires, clearance, and sand and gravel extraction. The groundwater sources of the study area occur at both shallow and deep depths; the groundwater bodies consist of gravel and boulders situated in plain areas and in the alluvial plains of the rivers.
Sampling and Preservation
During each month of 2021, a sample was taken from each of 12 dug wells located on properties in five localities situated in the protected area (Figure 1). The dug wells, open on the properties of the inhabitants, are carved to a depth of 4-6 m with an 80 cm diameter; they are made of cement-asbestos or rock tubes and fitted with a pulley system for water use.
The sampling was performed according to standard procedures (SR ISO 5667-23:2011; SR ISO 5667-3:2013). Clean high-density polyethylene bottles, rinsed with the water sample, were inserted directly to a 10 cm depth into the groundwater, allowing them to fill without air. The physico-chemical parameters were determined in situ using portable equipment. For the trace metal content analysis, samples were acidified with 65% nitric acid to pH 1-2 to prevent the precipitation and retention of metals on the walls of the sampling bottles. All samples were preserved by refrigeration in thermal boxes protected from sunlight and transported to the laboratory for analysis within 24 h. Three water samples were taken at each sampling station.
Experimental Methods
Groundwater samples were studied in order to evaluate their chemical components and therefore their quality. A total of 32 heavy metals (Ag, As, Au, Bi, Cd, Co, Cr, Cs, Cu, Fe, Ga, Ge, Hf, In, Ir, Mn, Mo, Nb, Ni, Pb, Pd, Pt, Rb, Rh, Sn, Sr, Te, Ti, Tl, V, Zn, Zr), nutrients (NH₄⁺, NO₂⁻, NO₃⁻, Cl⁻, PO₄³⁻, SO₄²⁻, CO₃²⁻, HCO₃⁻, Al, Ba, Be, Ca, K, Li, Mg, Na, and Sr), and physico-chemical indicators (pH, electrical conductivity (EC), oxidation-reduction potential (ORP), total hardness (Ht), dissolved oxygen content (DO), turbidity (T), and total dissolved solids (TDS)) were analyzed in the 12 samples collected during every month of 2021. The oxidation-reduction potential, dissolved oxygen, oxygen saturation, and pH were analyzed in situ according to the corresponding SR standards, while PO₄³⁻, HCO₃⁻, and total hardness were determined according to the American Public Health Association (APHA, 1999) and SR ISO 6059-2008. The anion content was determined according to ISO 9297-2001 and STAS 3265-86, using Hach Lange SL1000 portable equipment and a Perkin Elmer Lambda 25 spectrophotometer. The metal content was analyzed by mass spectrometry with a Perkin Elmer NexION 300S inductively coupled plasma mass spectrometer, according to SR EN ISO 15586-2004. The samples were prepared via acidification with 65% HNO₃ (Merck), followed by heating at a controlled temperature and pressure and filtering with 0.45 µm cellulose acetate filters. The methods were verified by analyzing internal standards, blanks, and triplicates, with recoveries ranging from 89% to 105%. The equipment was calibrated with standard solutions traceable to SRM from NIST Certipur.
Statistics and Water Typology
The results are represented as the mean value in 2021 with the standard deviation calculated based on the values obtained in the 12 months of 2021.
The water typology was determined using different plots (Piper, Gibbs, Stiff, and Schoeller). The Piper plot was based on the amounts of the major cations (Ca²⁺, Mg²⁺, Na⁺, and K⁺) and anions (Cl⁻, CO₃²⁻, HCO₃⁻, and SO₄²⁻), indicating the various types of water [16]. The Gibbs plot was based on the total dissolved solids content and the ratios Na⁺/(Na⁺ + Ca²⁺) and Cl⁻/(Cl⁻ + HCO₃⁻) [17-20]; it was used to assess the main chemical processes in the groundwater resources, such as the interaction between water and rocks, the evaporation-crystallization process, and atmospheric precipitation [18-20]. The Stiff and Schoeller plots are graphical representations of the major cation and anion contents of the water samples [21].
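For illustration, the Gibbs plot coordinates can be computed as below. The equivalent weights are standard values (mg per meq), while the ion concentrations passed in would be the measured mg/L values; the ones shown in the usage line are hypothetical.

```python
# Standard equivalent weights (mg per meq): atomic/formula mass divided by charge.
EQ_WT = {"Na": 22.99, "K": 39.10, "Ca": 20.04, "Cl": 35.45, "HCO3": 61.02}

def to_meq(conc_mg_l, ion):
    """Convert a concentration in mg/L to meq/L."""
    return conc_mg_l / EQ_WT[ion]

def gibbs_ratios(na, k, ca, cl, hco3):
    """Return the (cation, anion) ratios used on the Gibbs plot axes."""
    na_m, k_m, ca_m = to_meq(na, "Na"), to_meq(k, "K"), to_meq(ca, "Ca")
    cl_m, hco3_m = to_meq(cl, "Cl"), to_meq(hco3, "HCO3")
    cation = (na_m + k_m) / (na_m + k_m + ca_m)
    anion = cl_m / (cl_m + hco3_m)
    return cation, anion

# Hypothetical sample: 45 mg/L Na, 3 mg/L K, 60 mg/L Ca, 80 mg/L Cl, 240 mg/L HCO3.
cation, anion = gibbs_ratios(na=45.0, k=3.0, ca=60.0, cl=80.0, hco3=240.0)
print(f"cation ratio = {cation:.2f}, anion ratio = {anion:.2f}")
```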
For the current study, the free versions of XLStat (Addinsoft, New York, NY, USA), Microsoft Excel (version 2210, Microsoft Corporation, Redmond, WA, USA), and the AqQa and GW_Chart version 1.29 software (US Geological Survey, Reston, VA, USA) were used for the statistical calculations and for obtaining the diagrams of interest. For the calculations, the mean values of the results obtained over all the studied months of 2021 were used.
Pollution Indices
The pollution status of heavy metals can be assessed by applying pollution indices. Two of the most commonly used heavy metal pollution indices are PI (Pollution Index) and HEI (Heavy metal Evaluation Index).
Pollution Index (PI)
The suitability of water for human consumption and the overall quality of water were evaluated using the PI [22,23]. The PI was calculated from several chemical parameters (heavy metals), guideline values, and specific subindices; the applied guideline values followed the World Health Organization regulations for water quality. The PI was calculated with the help of the following equation (Equation (1)):

\[ \mathrm{PI} = \frac{\sum_{i=1}^{n} W_{i} Q_{i}}{\sum_{i=1}^{n} W_{i}} \]  (1)

where Q_i is the subindex of the i-th chemical indicator, i.e. the ratio between the monitored value of the heavy metal and the guideline value (Q_i = (M_v/G_v) × 100); W_i is the unit weight of the i-th chemical parameter (W_i = 1/G_v for each heavy metal); n is the total number of heavy metals considered; and M_v and G_v are the monitored and guideline values of the chemical parameters [23]. The PI scores classify the waters into one of three pollution level categories: PI < 15 indicates a low pollution level, 15 < PI < 30 a medium pollution level, and PI > 30 a high pollution level [22].
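A minimal sketch of this calculation, assuming hypothetical concentrations and placeholder guideline values (the study's own WHO-based limits should be substituted), is given below.

```python
# Placeholder guideline values G_v in mg/L; replace with the study's WHO-based limits.
GUIDELINES = {"Fe": 0.2, "Mn": 0.05, "Pb": 0.01, "Cu": 2.0, "Zn": 3.0}

def pollution_index(measured):
    """Equation (1): weighted mean of subindices Q_i with weights W_i = 1/G_v."""
    num = sum((1 / GUIDELINES[m]) * (c / GUIDELINES[m]) * 100
              for m, c in measured.items())
    den = sum(1 / GUIDELINES[m] for m in measured)
    return num / den

# Hypothetical monitored concentrations M_v in mg/L for one sample.
sample = {"Fe": 0.35, "Mn": 0.04, "Pb": 0.005, "Cu": 0.1, "Zn": 0.2}
pi = pollution_index(sample)
level = "low" if pi < 15 else "medium" if pi <= 30 else "high"
print(f"PI = {pi:.1f} ({level} pollution)")
```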
Heavy Metal Evaluation Index
HEI is an elementary method referenced strictly to the guideline values for the heavy metal content. It was applied using the following equation (Equation (2)), according to Edet and Offiong [22]:

\[ \mathrm{HEI} = \sum_{i=1}^{n} \frac{M_{v}}{G_{v}} \]  (2)

where M_v is the monitored value of the studied heavy metal and G_v is the applied guideline value [24]. The guideline values used were generally those established by national and international legislation [25,26]. According to Gharderpoori [24], there are three classes of pollution: low (HEI < 10), medium (10 < HEI < 20), and high (HEI > 20).
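The HEI computation is a one-liner by comparison; the sketch below uses the same kind of hypothetical inputs as the PI example and applies the thresholds cited in the text.

```python
# Equation (2): plain sum of monitored-to-guideline ratios.
# Concentrations and guideline values (mg/L) are hypothetical placeholders.
GUIDELINES = {"Fe": 0.2, "Mn": 0.05, "Pb": 0.01, "Cu": 2.0, "Zn": 3.0}
sample = {"Fe": 0.35, "Mn": 0.04, "Pb": 0.005, "Cu": 0.1, "Zn": 0.2}

hei = sum(c / GUIDELINES[m] for m, c in sample.items())
level = "low" if hei < 10 else "medium" if hei <= 20 else "high"
print(f"HEI = {hei:.1f} ({level} pollution)")
```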
Health Risk Assessment
In order to assess the non-carcinogenic health risk from the oral intake of water contaminated with metals, different tools can be used, such as the chronic daily intake (CDI), the hazard quotient (HQ), and the hazard index (HI) [27-29]. The indices were calculated using the following equations (Equations (3)-(5)):

\[ \mathrm{CDI} = \frac{C \times IR \times EF \times ED}{BW \times AT} \]  (3)

\[ \mathrm{HQ} = \frac{\mathrm{CDI}}{\mathrm{RfD}} \]  (4)

\[ \mathrm{HI} = \sum \mathrm{HQ} \]  (5)

where C represents the metal concentration (mg/L), and IR, ED, and EF are the ingestion rate (2 L/day), exposure duration (30 years), and exposure frequency (365 days/year). BW and AT are the body weight (70 kg) and the average exposure time (365 × ED). RfD is the reference dose for each contaminant according to the Integrated Risk Information System [30]: 0.004 mg/kg As, 1.5 mg/kg Cr, 0.0005 mg/kg Cu, 0.14 mg/kg Mn, 0.02 mg/kg Ni, 0.004 mg/kg Pb, 0.3 mg/kg Zn, and 0.00143 mg/kg Al [30]. The HQ and HI scores indicate whether the studied water presents non-carcinogenic risks from the contaminants if used for drinking purposes: HQ > 1.0 and HI > 1.0 indicate waters that pose health risks due to the analyzed metals, while HQ < 1.0 and HI < 1.0 indicate no risk.
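A compact sketch of Equations (3)-(5), using the exposure parameters and IRIS reference doses quoted above, is given below; the metal concentrations in the usage line are hypothetical.

```python
# Reference doses (mg/kg-day) as quoted in the text, and the stated exposure parameters.
RFD = {"As": 0.004, "Cr": 1.5, "Cu": 0.0005, "Mn": 0.14,
       "Ni": 0.02, "Pb": 0.004, "Zn": 0.3, "Al": 0.00143}
IR, EF, ED, BW = 2.0, 365, 30, 70.0   # L/day, days/year, years, kg
AT = 365 * ED                          # days

def cdi(conc_mg_l):
    """Equation (3): chronic daily intake in mg/kg-day."""
    return conc_mg_l * IR * EF * ED / (BW * AT)

def hazard(concentrations):
    """Equations (4) and (5): per-metal hazard quotients and their sum (HI)."""
    hq = {m: cdi(c) / RFD[m] for m, c in concentrations.items()}
    hi = sum(hq.values())
    return hq, hi

# Hypothetical concentrations (mg/L) for one sample.
hq, hi = hazard({"Mn": 0.05, "Zn": 0.06, "Al": 0.03})
print(hq, f"HI = {hi:.3f}", "risk" if hi > 1 else "no risk")
```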
Water Quality Characterization and Effect on Human Health
The physico-chemical characteristics of the studied groundwater samples are presented in Table 1. According to the pH values, the waters are weakly basic to neutral, indicating the presence of weak base salts in the soil near the water system [31]. A decrease of the pH is related to an increase of CO₂, while an increase of the pH is associated with an increase of the alkalinity and HCO₃⁻ in the water [32]. The pH depends on the partial pressure of CO₂, the dissolved matter, and the temperature; it characterizes and influences chemical and biological processes, some of which are harmful to human health, and it is controlled by the HCO₃⁻, CO₃²⁻, and CO₂ equilibrium systems [31,33]. The oxidation-reduction potential (ORP) shows low values (Table 1), except for samples 5 and 12. It is a significant indicator in the oxidative disinfection processes of water: disinfectants consume electrons, while contaminants with reducing characteristics donate electrons. In the case of well water, chlorine is used as a disinfectant due to the action of compounds (hypochlorous acid) released by the reaction of chlorine with water [13,33]. A decrease of the ORP increases the chlorine demand, owing to contaminants acting as reducing agents. Sample 8 is characterized by low EC and TDS, indicating a low amount of dissolved inorganic matter in ionized form coming from surface catchments. Sample 9 has a high EC, indicating high salinity and a high amount of TDS originating from infiltrated rainwater, which dilutes the groundwater and evaporates [34]. High EC and TDS are likewise a result of anthropogenic activities; they indicate the total ion concentration and mobility, and they modify the taste of the water [34].
The concentration of dissolved oxygen in water depends on the pressure and the temperature. Dissolved oxygen decreases in the presence of organic matter, owing to the oxygen uptake of its oxidative degradation [34]. The increase in oxygen consumption is a response to water eutrophication caused by an excess of nutrients (N, P). A low oxygen concentration (<5 mg/L) induces stress on aquatic habitats and ecosystems and increases the bacterial population. The presence of fertilizers used in agricultural practices also influences bacterial development [34].
High turbidity (sample 4) appears during strong precipitation and floods specific to the rainy seasons, causing siltation and sedimentation. High sedimentation and siltation create the conditions for increased bacterial populations and metal loads, causing pollution [32]. The turbidity of water is caused by the presence of particulate or suspended matter, which influences the penetration of light into the water [35].
The water samples are characterized by low amounts of NO₂⁻ and NO₃⁻, with values lower than the MACs. Sources of NO₃⁻ and NO₂⁻ are related to agricultural activities (such as the use of organic and chemical pesticides and nitrogen-based fertilizers and the degradation of organic waste), household activities (such as septic tanks), and industrial activities (including leaching), but they can also occur naturally (via the degradation of proteins) [36]. Health issues, namely spleen hemorrhaging or diuresis, appear upon exposure to NO₂⁻ and NO₃⁻ [34]. NO₃⁻ is very mobile in soil and soluble in groundwater sources, and it precipitates as a mineral in dry conditions [37]. NH₄⁺ exceeds the MAC two to seven times in all the samples. Samples 1-11 are rich in NH₄⁺, which could lead to negative effects on human health if consumed; NH₄⁺ can react with Cl⁻ and form chloramines [25]. Sources of NH₄⁺ include areas rich in gas and oil resources [38]. Compared to NO₃⁻, which is the oxidized and stable form, NH₄⁺ also originates from agricultural activities, local pedoclimatic variability, or hydrological conditions [39].
The results regarding the total hardness (Ht) indicate reduced Ca and Mg contents (except for the soft water samples 1 and 3 and samples 8, 10, and 11). Waters with hardness lower than the MAC, i.e. soft waters, are characterized by corrosivity and low buffering capacity [25]. Cation exchange, the weathering of igneous rocks (including the feldspar, amphibole, and pyroxene groups) and limestone, wastewaters, and industrial activities are all sources of Ca [33,37]. Mg is significant for the human body, ensuring well-functioning cells, maintaining the blood sugar level, and preventing endocrine, cardiac, and neurologic diseases, while a high amount of Mg could cause paralysis, nausea, and laxative effects [13,33].
Runoff and sewage discharges of fertilizers are potential sources of PO₄³⁻ [34]. Intensive agricultural activities increase the phosphorus in water systems, favoring the excess development of algae, i.e. eutrophication [40]. Eutrophication negatively affects the quality of the water (including its taste and color) and the functionality of ecosystems and biodiversity [41]. Over time, the intensive use of chemical and natural fertilizers increases the amount of PO₄³⁻ in groundwater systems.
Samples 2 and 6 exceed the MAC established for Cl⁻ (250 mg/L) twofold, which correlates with the highest NH₄⁺ amounts and is responsible for a salty taste. These exceedances have negative effects on agricultural crops, on human health (affecting people with cardiovascular and kidney conditions and causing laxative effects), and on household systems by corroding plates and pipes [13,33]. The sources of Cl⁻ are anthropogenic activities (including the use of fertilizers, CaCl₂, and domestic sewers), but also contact with soil and rocks [37]. The presence of Cl⁻ in water systems increases the electrical conductivity and, implicitly, the corrosivity. In metallic pipelines, Cl⁻ reacts with metallic ions, forming soluble salts and increasing the metal content in the water (or attacking the protective oxide layer). Cl⁻ and Na⁺ are important for water quality because they are the most abundant electrolytes in living bodies and play a role in the acid-base balance and osmotic pressure. NaCl, MgCl₂, and CaCl₂ are used extensively in the chemical industry (such as for the production of NaClO, NaClO₂, and NaOH) and for road defrosting [13].
The sources of HCO₃⁻ and CO₃²⁻ ions can be the natural dissolution of soil (humic acids) and rocks (including silicate minerals, limestone, and dolomite), atmospheric CO₂, sulphate reduction processes (bacteria-organic matter), anthropogenic activities, or the respiration of aquatic organisms [37]. More than 50% of the samples exceeded the MAC established for the HCO₃⁻ content (200 mg/L) twofold. HCO₃⁻ is influenced by the dissolved CO₂, salts, cations, the pH, the temperature of the water, and other dissolved salts [21,39], and it correlates with the hardness. High amounts of HCO₃⁻ are related to the dissolution of soil and rocks [39].
The studied samples are not rich in SO₄²⁻; fertilizer use, the mineral constituents of the water, the dissolution of sulphate minerals, and the geological profile of the soil could be its sources [34]. Water rich in SO₄²⁻ could affect human health (such as by leading to cancer, heart diseases, and birth defects) [42].
The studied waters are rich in a variety of metals, as shown in Table 2. The results are represented as the mean values obtained during 2021, with the standard deviation calculated from the values obtained in the 12 months. The presence of B, Ba, Li, Ga, and Sr (natural elements ubiquitous in the environment) is due to water-rock (including micas, granites, amphiboles, and schists) interactions [13,25]. The heavy metal content is high and, in the case of As, Fe, and Mn, exceeds the MACs: by 1.0 µg/L for As in sample 2, and for Fe in samples 5, 6, and 8. Household activities (such as leakage and waste), industrial activities (including discharges and wastes), and agricultural activities (including the use of herbicides, pesticides, and fertilizers) are responsible for the high metal content. Moreover, natural processes (such as water withdrawal, precipitation, and geology) could amplify the increase in metal content [43,44]. On the other hand, some microelements are essential for sustaining human health, such as Mg, Ca, K, Fe, and Zn [39].
Samples 5 and 6 are characterized by the highest Fe concentrations. If consumed, water rich in Fe can negatively affect human health, causing cardiovascular, liver, gastric, pulmonary, and breathing issues, as well as rash, fatigue, and tingling [9], although Fe is also a significant nutrient for aquatic organisms. The sources of Fe could be the weathering of granite or basic rocks, the chemical decomposition of ferruginous deposits, or atmospheric exposure, which leads to Fe(II) hydrolysis in the presence of dissolved oxygen and generates Fe(OH)₃ [25,38]. The release of Fe is influenced by the variation of the pH, dissolved oxygen, alkalinity, organic matter, and micro-organisms [34]. A metallic, unpleasant taste and muddy odor characterize Mn-rich water (samples 5, 6, and 8), which may cause apathy, muscular pain, and anorexia [1,8]. Sources of Mn are industrial activities (such as the production of alkaline batteries or cleaning products), agricultural activities (including the use of fungicides and fertilizers), and mining activities [45]; nevertheless, Mn is also an abundant element naturally found in the Earth's crust [45]. The presence of Mn in the water distribution system forms deposits that can slough off as a black precipitate. The nervous system is affected by the ingestion of food and water contaminated with Mn (it may lead to Parkinson's disease and altered cognitive and motor functions) [45].
The high Na values (sample 6) could be caused by the dissolution of soil salts and rock-forming minerals, septic tank infiltrations, and cation exchange interactions between the clay fraction and the groundwater, suggesting significant water-rock interactions [21,46]. According to Petrovic [38], water with a considerable amount of Na is characterized by rich mineralization processes implying a high number of trace elements. A high Na concentration causes heart, renal, and neurologic diseases [47]; individuals with renal and cardiovascular conditions need water low in Na [47]. The geological structure (alkali feldspar), ionic exchange processes (the adsorption of Ca from the rock and the enrichment of the water with Na), the alteration of sodium aluminosilicate minerals, and active weathering processes are responsible for the presence of Na in the water samples [13,33].
Samples 7-10 are characterized by high amounts of K, exceeding the MAC two to four times; potential sources are the use of chemical and organic manure or human waste [33,39]. High amounts of K in water are related to the use of K-rich fertilizers in agricultural practices [46].
Water rich in Al (sample 5), if ingested, could cause chromosome aberrations, as shown in barley meristem cells; even a low amount of Al in water can pose negative (non-carcinogenic) effects on human health [1]. A possible source is the use of Al₂(SO₄)₃ in the water treatment process. Samples 1, 5, and 9 are rich in Ni. The pH, soil, and depth influence the amount of Ni; Ni amounts higher than the background value are related to mining plants and industrial waste. Given the carcinogenic characteristics of heavy metals, Ni combined with Cd, Cr, and As alters and damages DNA [9].
The highest As value is attributed to sample 2, which exceeds the MAC, while sample 1 just reaches the MAC. After ingestion, As is rapidly absorbed from the gastrointestinal tract and further metabolized [48]. High amounts of As negatively affect human health, causing vascular and skin diseases, vomiting, diarrhea, encephalopathy, and cancer [48]. Due to the geochemical conditions, As present in groundwater is subject to sharp fluctuations [49].
The relatively high amounts of Pb in the studied samples (samples 1, 3, 4, and 12) are potentially caused by improper discharges from industrial activities loaded directly into the groundwater sources, by agricultural practices (including fertilizers and pesticides), or by natural processes related to the weathering of minerals (such as dolomite, marble, and limestone) [9,50]. Negative health effects can appear in the liver, thyroid, and bones, and Pb can also lead to high blood pressure, brain damage, infertility, and even cancer [9,50].
The presence of Cd in the studied samples, especially in sample 4, is attributed to both natural and anthropogenic sources [12,50]. A drinking water source high in Cd could cause immediate poisoning and diarrhea, damaging the kidneys and liver [1,9].
Generally, Cu (sample 8 just reaches the MAC) occurs due to natural processes (such as rock degradation) as well as anthropogenic activities (such as mining, municipal, industrial, and agricultural activities) [1]. Stomach-ache, cerebral pain, and irritated eyes and nose can occur if water rich in Cu is consumed [9].
Sample 1 is also rich in Cr and just reaches the MAC, probably due to the presence of magnesiochromite, a chromite occurring in mafic and ultramafic rocks, from which Cr ions are released into the water systems through weathering processes [1,50]. According to Ali [12], Cr is a powerful oxidizing agent and is entirely adsorbed by aquatic vegetation, indicating direct intake from sediments.
Zn is also a natural element; the interaction of groundwater with the surrounding rocks slowly enriches the water body through delayed exchange (as in the case of sample 1; Table 2) [51]. The inorganic carbon content and the pH influence the solubility of Zn [25]. Zn is characterized by high mobility in water systems, induces opalescence, has an astringent taste, and is released into the environment by worn rubber vehicle tires and coal combustion [1]. Zn is essential for living creatures, although it is toxic in high concentrations, causing cardiovascular issues, affecting immunity, causing cell mutations, increasing the permeability of the cell membrane, and causing death [9,52].
The release of heavy metals in the study area is related to the geological conditions, namely the presence of volcanic rocks (including andesite and rocks rich in sulphide veins), and to natural processes involving rocks and minerals (such as degradation, weathering, and oxidation) [50].
Piper and Gibbs Diagrams
A Piper diagram was plotted for all 12 water samples with the help of the concentrations of four major cations (Ca²⁺, Mg²⁺, Na⁺, and K⁺), four anions (Cl⁻, SO₄²⁻, CO₃²⁻, and HCO₃⁻), and the TDS. According to the plot and the Manoj [53] classification, the studied samples are classified into the mixed Ca²⁺-Mg²⁺-Cl⁻ type (sample 2), the Na⁺-Cl⁻ type (samples 3 and 6), the Ca²⁺-HCO₃⁻ type (samples 1, 4, and 5), the Na⁺-HCO₃⁻ type (samples 7, 8, 10, and 12), and the mixed Ca²⁺-Na⁺-HCO₃⁻ type (sample 9). Sample 11 has a mixed typology according to the diamond plot; there is no dominant type according to the anion triangle and an Na⁺-K⁺ type according to the cation triangle (Figure 2). The presence of silicate, igneous rocks, minerals, and weathering contributes to the dominance of waters of the Ca²⁺-Na⁺-HCO₃⁻ type. The samples with the Na⁺-HCO₃⁻ typology are characterized by reverse ionic exchange processes of Ca²⁺ and Na⁺ and the weathering of albite or other igneous rock minerals [37]; according to Rupias [37], minerals containing Ca²⁺ and Na⁺ are susceptible to weathering processes.

The Gibbs diagram indicates three distinct fields, namely the evaporation, precipitation, and rock-water interaction dominance areas [37]. According to Figure 3, the majority of the studied samples fall into the rock-water interaction dominance field, indicating that the water samples originate from the interaction between the chemistry of percolated water under the lakes and the rock chemistry. According to Gibbs [18] and Shah [21], a Gibbs diagram indicates the natural mechanisms controlling water systems, such as evaporation, rock, or precipitation dominance; Gibbs plots relate different physico-chemical parameters (anions and cations) to the TDS.
In the present study, two Gibbs diagrams were applied to all 12 water samples, based on the anion ratio Cl⁻/(Cl⁻ + HCO₃⁻) and the cation ratio (Na⁺ + K⁺)/(Na⁺ + K⁺ + Ca²⁺) (Figure 3). According to the Gibbs plots, the studied water samples are generally characterized by rock (weathering) dominance. The Gibbs ratio ranges from 0.07 to 1.99 for the anions, while the Gibbs ratio related to the cation content ranges between 0.69 and 0.98, indicating that weathering is the likely source of the hydrochemistry of the studied water samples.
Stiff and Schoeller Diagrams
A Stiff diagram (Figure 4) is a graphical representation of the major ions identified and determined in the water samples: Mg²⁺, Ca²⁺, Na⁺, K⁺, SO₄²⁻, Cl⁻, HCO₃⁻, and CO₃²⁻. Characteristically, anions are placed on the right side of the centre axis and cations on the left side, so that equivalent amounts are presented; the amounts are indicated in meq/L (milliequivalents/L). According to the Stiff plot, the dominant water types are HCO₃⁻ + CO₃²⁻ and Cl⁻ (Figure 4). According to the Stiff and Schoeller plots (Figures 4 and 5), the cation content is not notable compared to the anion content. More than 50% of the samples are dominated by the HCO₃⁻ + CO₃²⁻ content (samples 1, 4, 5, 7, 8, 10, and 12), while fewer than 50% are characterized by high amounts of Cl⁻ (samples 2, 3, 6, and 11). Sample 9 is dominated by both Cl⁻ and HCO₃⁻ + CO₃²⁻. The same trend and results, expressed in mg/L, are shown in the Schoeller plot (Figure 5).
Correlations between the Metal Content and the Pollution Indices
Pearson's correlation was determined between the metal contents (As, Al, Cd, Cr, Cu, Mn, Ni, Zn, and Fe) and the PI and HEI scores. As indicated in Table 3, positive correlations are observed between As-Fe, Fe-Mn, Mn-Ni, and Cr-Zn. Significant correlations are also established between the metal concentrations and the pollution indices, such as Fe-PI, Fe-HEI, As-PI, Al-HEI, and Mn-HEI. The highest PI score correlates with the highest As amount, followed by PI correlated with the highest Fe and Mn concentrations.
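For reference, this correlation step reduces to a single pairwise Pearson matrix; the sketch below shows the computation on a random placeholder table standing in for the measured concentrations and index scores.

```python
import numpy as np
import pandas as pd

# Hypothetical 12-sample table: two metal concentrations (mg/L) and the two
# index scores; in the study this would hold the Table 2/Table 3 values.
rng = np.random.default_rng(2)
df = pd.DataFrame({
    "Fe": rng.uniform(0.05, 0.5, 12),
    "Mn": rng.uniform(0.01, 0.3, 12),
    "PI": rng.uniform(5, 45, 12),
    "HEI": rng.uniform(1, 12, 12),
})
print(df.corr(method="pearson").round(2))  # pairwise Pearson r matrix
```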
Pollution Indices
The PI and HEI results indicate three different pollution levels. According to the PI scores, samples 1-6 are characterized by a high pollution level, samples 7 and 9-11 by a low pollution level, and samples 8 and 12 by a medium pollution level, as indicated in Figure 6. The mean value is 24.8, while the lowest value is 8.90 (sample 10), followed by 7 < 9 < 11 < 12 < 8 < 3 < 4 < 2 < 6 < 5 < 1. Sample 1 has the highest score, owing to the highest concentrations of Al, Pb, and Cr among all samples. Based on the HEI results, sample 5 is characterized by a medium pollution level, while the rest of the samples show a low level of metal pollution. Generally, as indicated in Figure 6, the mean value is 4.05, indicating a low level of pollution. The highest value is 10.4, obtained for sample 5, followed by 7 > 8 > 6 > 1 > 2 > 3 > 4 > 11 > 12 > 9 > 10. The highest score is directly proportional to the highest Fe and Mn concentrations, certain of which exceed the MACs. Other studies in different parts of the world have used pollution index methods to determine the pollution level of water. In Guanzhong Plain, China, HPI and HEI scores ranged from 0.33-28.5 and 0.06-4.57, indicating a low level of metal pollution [29]. In Angul, India, waters were characterized by a low pollution level, as reflected by the HPI scores (30-87) [54].
In the same region as this study (Maramures, the north-western part of Romania), HPI results reported in the literature range between 5.5 and 97.7, indicating no metal pollution in the water samples used as drinking water, and HEI scores range from 1.5 to 14.0, likewise indicating no metal pollution [55]. Alluvial aquifers situated in the Maramures Depression have been studied for heavy metal pollution with the help of pollution indices (HEI and HPI). The results indicate three types of pollution status: low, medium, and high. The HPI and the HEI range from 5.6 to 234 and 0.4 to 59, respectively. The high scores are attributed to the high amounts of Mn and Fe caused by water-rock interactions and the presence of organic colloids and humic materials [56].
On the other hand, in the south-eastern part of the country (Dobrogea), the studied waters fall into two classes of samples: unpolluted and polluted with the studied metals [14]. HPI results range between 89.2 and 196 due to the high amounts of Cr, which exceed the MAC, while HEI scores range from 0.1 to 1.0, reflecting the two pollution statuses [14].
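For concreteness, a hedged sketch of the HEI computation as commonly defined in the pollution-index literature (the sum, over metals, of the measured concentration divided by the maximum admissible concentration) follows. The exact PI formula used here is not reproduced, and the concentrations and MAC values below are illustrative assumptions only.

def hei(concentrations, mac):
    # HEI = sum over metals of C_i / MAC_i (Edet & Offiong-style definition)
    return sum(concentrations[m] / mac[m] for m in concentrations)

sample = {"Fe": 1.8, "Mn": 0.45, "As": 0.004}  # hypothetical, mg/L
mac = {"Fe": 0.2, "Mn": 0.05, "As": 0.01}      # illustrative limits, mg/L
print(f"HEI = {hei(sample, mac):.2f}")          # 18.40; low/medium/high cutoffs vary by scheme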
Health Risk Assessment
The metal contents are examined with respect to the human health risk assessment, which is based on the oral intake of water by adults. The results regarding the chronic daily intake (CDI) and the hazard quotient (HQ) are indicated in Figure 7 and Table 4. Based on the obtained results, the most relevant chronic daily intake for the studied metals through water intake is represented by Mn, followed by Zn < Al < Cu < Cr < As < Ni < Pb < Cd. Generally, the scores reveal that the highest CDI values are obtained for samples 5, 8, and 1. The metal consumed and absorbed in the highest amount through water ingestion is Zn, with a mean value of 1.6 × 10−3 mg/kg-day (Figure 7). The Zn concentration is increased due to water-rock interactions. A higher concentration could affect the water quality and, if consumed, human health, causing cardiovascular disease and death [9,52]. Mn and Al are also absorbed in high amounts, with mean values of 1.3 × 10−3 mg/kg-day and 0.9 × 10−3 mg/kg-day, respectively, which could pose a major health risk if ingested, affecting the nervous system and cells [1,45].
When calculating HQ and HI (which are conservative health risk assessment tools), the non-carcinogenic risk related to toxic element exposure (Al, As, Cr, Cd, Cu, Mn, Ni, Pb, and Zn) is estimated for the ingestion of water (oral toxicity) by adults. HQ scores depend on body weight, the volume of water consumed by the inhabitant, and the exposure frequency and duration. Table 4 indicates the mean HQ scores. Overall, HQ results are lower than 1.0, indicating that, if consumed, the drinking water samples present no non-carcinogenic risk to human health, except for samples 5 and 10, which are characterized by high amounts of Al. Consequently, Al contributes most to the non-cancer risk exposure. Sources of Al are related to the chemical processes applied in water treatment, and cells can be affected by the presence of Al in ingested water [1]. Mn and Zn follow Al as the main contributors to ingestion exposure and its human health impact. The HQ scores were all below the critical value of 1.0, ranging from 2.0 × 10−5 to 8.4 × 10−2, except for samples 5 and 10, whose values are 2.4 and 2.0, respectively; the lowest values are obtained for samples 7, 2, and 12. Overall, the results of the present study indicate that pollution with the studied metals can occur and may affect human health through the ingestion pathway. The hazard quotient caused by metal intake through water ingestion provides an approach, comparable to that of other studies, that helps estimate health risks and protect the population. Similarly, HQ has been applied in other parts of the country to assess the risk of ingesting metals through water. Studies in the south-eastern part of the country indicate HQ scores lower than the critical value: HQCd varies between 2.6 × 10−2 and 2.8 × 10−2, HQCr between 3.5 × 10−2 and 4.3 × 10−1, HQCu between 9.5 × 10−4 and 1.3 × 10−3, HQNi between 2.4 × 10−3 and 8.2 × 10−3, HQPb between 2.1 × 10−5 and 2.3 × 10−5, and HQZn between 1.2 × 10−4 and 4.1 × 10−4 [14].
HI was calculated by summing the HQ values of the studied metals. The HI results indicate no potential risk related to ingesting the studied waters for the majority of samples, except for samples 5 and 10. HI scores range between 0.03 and 2.5, with the highest values obtained for samples 5 and 10; the lowest value is obtained for sample 7, followed by 2 < 12 < 8 < 11 < 3 < 4 < 6 < 1 < 9.
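The ingestion-risk chain described above follows the standard US EPA-style equations CDI = (C × IR × EF × ED)/(BW × AT), HQ = CDI/RfD, and HI = ΣHQ. A minimal sketch follows; the exposure parameters and oral reference doses are common adult defaults used here only as assumptions, not the values of this study.

IR, EF, ED, BW = 2.0, 365, 70, 70   # L/day, days/year, years, kg (assumed adult defaults)
AT = ED * 365                        # averaging time in days

RFD = {"Al": 1.0, "Mn": 0.024, "Zn": 0.3}  # assumed oral RfDs, mg/kg-day

def cdi(conc_mg_per_l):
    # chronic daily intake via drinking water, mg/kg-day
    return conc_mg_per_l * IR * EF * ED / (BW * AT)

def hazard_index(concs):
    # HI = sum of per-metal hazard quotients HQ = CDI / RfD
    return sum(cdi(c) / RFD[m] for m, c in concs.items())

sample = {"Al": 0.9, "Mn": 0.05, "Zn": 0.06}   # hypothetical mg/L
print(f"HI = {hazard_index(sample):.3f}  (values above 1 suggest potential risk)")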
Conclusions
According to the obtained results, the studied groundwater samples collected from the protected Tisa River Basin are characterized by high amounts of Cl−, NH4+, Fe, and Mn, exceeding the MACs. The samples are also rich in Pb, Cu, and Mg, but in amounts lower than the MACs. The Piper diagram indicates that the studied water samples generally fall into five water types (Na+-Cl−, Na+-HCO3−, Ca2+-HCO3−, mixed Ca2+-Mg2+-Cl−, and Ca2+-Na+-HCO3−). According to the Gibbs plot, all water samples are characterized by weathering or rock dominance. Based on the Pearson correlation, a positive correlation is noticed between Cr-Zn, As-Fe, Fe-Mn, and Mn-Ni, indicating the same pollution source. A positive correlation is also observed between the highest metal contents and the pollution index scores (PI and HEI). Based on the two metal pollution index results, three different pollution levels are determined. The risk assessment analysis indicates no non-carcinogenic risks related to the studied metals in the water samples, except for two samples, which are characterized by high amounts of Al. Consequently, it is recommended that the studied water samples be further monitored and treated if they are used for drinking purposes. This study is significant for future research related to determining and assessing the quality of water sources situated in areas where agricultural practices are implemented. In this way, the population is informed and aware, and possible negative health effects related to the ingestion of poor-quality water can be prevented. Sustainable management and protection policies need to be framed in order to decrease the possible negative effects on human health. This study's results could be used in mitigation and management efforts regarding poor-quality water sources and in medical research. Perspectives for new research relate to the identification of diseases linked to such exposure and their effects on organs.
"year": 2022,
"sha1": "56c904e419bd524fd83b0be4d3442373027d5e85",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/19/22/14898/pdf?version=1668328533",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "98a88c277c0b2436206cda37cd807aec6fd36831",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
Determinants of Stress Levels and Behavioral Reactions in Individuals With Affective or Anxiety Disorders During the COVID-19 Pandemic in Russia
Introduction Individuals with affective and anxiety disorders are among those most vulnerable to the negative effects of the COVID-19 pandemic. Aim This study aims to analyze the determinants of stress levels and protective behavioral strategies associated with the COVID-19 pandemic in Russian-speaking people with affective or anxiety disorders (AADs). Materials and Methods In this cross-sectional online survey, the psychological distress and behavioral patterns of respondents with self-reported AAD (n = 1,375) and without disorders (n = 4,278) were evaluated during three periods of restrictive measures in Russia (March–May 2020). Distress levels were verified using the Psychological Stress Measure (PSM-25). Results Stress levels among respondents with AAD were higher in all study periods than for those with no mental disorder (Cohen's d 0.8–1.6). The stress level increased (Cohen's d = 0.4) in adolescents (16–18 years) with AAD and remained the same in those without disorders; in youths (19–24 years) with and without disorders, an increase (Cohen's d = 0.3) and a decrease (Cohen's d = 0.3) in stress were observed, respectively; stress in adults (25–44 years) with disorders did not change and decreased in those without disorders (Cohen's d = 0.4). Individuals with bipolar disorders demonstrated lower stress than individuals with depressive (Cohen's d = 0.15) and anxiety disorders (Cohen's d = 0.27). Respondents with depressive and bipolar disorders employed fewer protective measures simultaneously and were less likely to search for information about COVID-19. Conclusion The presence of affective or anxiety disorders is associated with a more acute response to the COVID-19 pandemic. Apparently, the type of mental disorder influenced stress levels and protective behavior patterns.
INTRODUCTION
Stress associated with the COVID-19 pandemic has a complex multifactorial nature and an ambiguous profile of behavioral reactions in the population (Fountoulakis et al., 2022). The danger of coronavirus infection has caused a wide range of psychological problems among the populations of countries with high infection rates (Qiu et al., 2020). The greatest negative impact on mental health has been caused by factors such as: an unprecedented, potentially life-threatening situation of uncertain duration and economic consequences; increased family conflicts during large-scale quarantine measures in all major cities; and an inconsistent information background with an oversupply of contradictory data (Sorokin et al., 2021; Vrublevska et al., 2021). The mental health consequences of such a crisis, including an increase in suicide rates, are predicted to continue for a long period and to peak after the actual pandemic (Pirkis et al., 2021).
Initial results confirmed that individuals with affective disorders are exposed to higher levels of stress, which in turn are associated with maladaptive situational and lifestyle changes occurring in response to the COVID-19 pandemic (Van Rheenen et al., 2020). In such individuals, maladaptation and levels of preexisting anxiety and depressive symptoms are likely to increase with each subsequent wave of COVID-19 infection because they are more vulnerable to biological, social, and economic disruptions (Dabrowska et al., 2021). Moreover, individuals with affective or anxiety disorders depend on many factors associated with proper mental health care. Regular access to mental health-care services, medications, stable daily routines, and social interactions are necessary for those with mood illnesses. Psycho-social stress and limited access to the abovementioned elements could significantly affect the anxiety and mood symptoms of individuals with mental disorders (Asmundson et al., 2022). Subsequently, it was found that individuals with affective disorders have an increased risk of COVID-19 infection, as well as an increased risk of hospitalization and death (Diez-Quevedo et al., 2021). Thus, the impact of the COVID-19 pandemic on mental health is not equal for all groups of the population, especially for persons with major psychiatric disorders. Therefore, these imbalances in the response to stress associated with the COVID-19 pandemic require more detailed study, taking behavioral reactions and socio-demographic indicators into account.
The study hypothesis is that the presence of affective or anxiety disorders is associated with a more acute response to the COVID-19 pandemic and epidemiological restrictions.
The study aims to analyze the determinants of stress levels and protective behavioral strategies associated with the COVID-19 pandemic in Russian-speaking people with affective and anxiety disorders.
METHODS
The study data were obtained through an extensive online survey conducted among Russian-speaking respondents during the restrictive period introduced as a measure to prevent the spread of coronavirus infection. The most significant parts of the sample were obtained for three periods:
• 30 March to 8 April 2020 (1st period): introduction of the first restrictive measures in Russia due to the worsening of the epidemiological situation;
• 29 April to 8 May 2020 (2nd period): final stage of restrictive measures;
• 9 May to 18 May 2020 (3rd period): cancellation of federal restrictive measures, early days of the post-restriction period.
Participants in the research were invited to complete an anonymous questionnaire via Google Forms, which took about 15 min. The questionnaire was distributed via social networks and on the websites of public organizations and thematic communities (refer to Acknowledgments).
The inclusion criteria were the ability to read Russian and consent to the processing of personal data. The non-inclusion criteria were the absence of values for individual points of the survey when filling in the questionnaire. The questionnaire was based on self-reports on the sociodemographic characteristics of respondents and their place of residence, as well as on self-reports of their health status. The questionnaire, which was distributed in communities of patients with mental disorders, included a question on the presence/absence of a diagnosed affective or anxiety disorder with the option of choosing one of the proposed diagnoses in the questionnaire: depressive disorder, bipolar affective disorder, generalized anxiety disorder, cyclothymia, or dysthymia.
All participants in the study were invited to select any of the proposed concerns about the COVID-19 pandemic and any of the preventative measures they had implemented. Original questionnaire items, which had already been used earlier (Sorokin et al., 2020), described 10 types of concerns associated with COVID-19 (contagiousness of the virus; risk of isolation; the absence of specific treatment for COVID-19; fear for one's own life; risk to the lives and health of relatives; possible financial difficulties; severe social consequences; lack of safety equipment for sale; possible lack of medication for daily intake; and impossibility of the traditional way of life) and six behavioral patterns of infection prevention (wearing a mask or respirator; use of antiseptics; hand washing; social distance; and self-isolation). The reliability of these two subsets of dichotomous questions was calculated with the Kuder-Richardson-20 test: 0.41 for concerns and 0.60 for preventative measures. The results reflected the diversity of emotional and behavioral reactions of respondents, so these levels were considered satisfactory. Individual respondents could also indicate how often they had sought information about the pandemic during the last week, ranked in eight degrees ranging from "never" to "hourly".
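For readers unfamiliar with the statistic, a small sketch of the Kuder-Richardson-20 computation for a respondents × items matrix of 0/1 answers follows (the toy data are simulated, not survey data).

import numpy as np

def kr20(items):
    # items: respondents x items matrix of 0/1 answers
    k = items.shape[1]
    p = items.mean(axis=0)                      # share answering 1 per item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - (p * (1 - p)).sum() / total_var)

rng = np.random.default_rng(0)
ability = rng.normal(size=(200, 1))
toy = (ability + rng.normal(size=(200, 10)) > 0).astype(int)  # correlated binary items
print(f"KR-20 = {kr20(toy):.2f}")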
The Psychological Stress Measure (PSM-25) is an 8-point Likert scale ("not at all" to "greatly") developed by Lemyre in 1990 to assess current stress levels. A version translated and adapted for the Russian-speaking population was used (Vodop'yanova, 2009). Its integral indicator of psychological stress is the total score, varying between 25 and 200, which reflects the expression of emotional, cognitive, and somatic reactions through three subscales identifying three levels of stress. A total of 6 of the 25 questions (nos. 2, 7, 9, 15, 16, and 22), describing somatic stress reactions, were evaluated separately. A high score (a sum higher than 155 points) indicates a state of maladaptation and the need for correction; a score of 100-154 points indicates an average level of stress; a low score (under 100 points) indicates a state of psychological adaptation to workloads. In this study, the PSM-25 demonstrated excellent internal consistency, with a Cronbach's alpha of 0.949.
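A minimal sketch of the PSM-25 total-score banding described above follows; note that the published cutoffs (under 100, 100-154, higher than 155) leave a score of exactly 155 ambiguous, and it is treated as high here.

def psm25_band(item_scores):
    # 25 items, each scored 1-8; total ranges 25-200
    assert len(item_scores) == 25 and all(1 <= s <= 8 for s in item_scores)
    total = sum(item_scores)
    if total > 154:   # text: "higher than 155" is high; 155 itself treated as high
        return f"high ({total}): maladaptation, correction needed"
    if total >= 100:
        return f"average ({total})"
    return f"low ({total}): psychological adaptation"

print(psm25_band([4] * 25))  # -> average (100)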
The study design was approved by an independent ethics committee (IRB registration number: ∋ κ--132/20) and conformed to the Declaration of Helsinki. Anamnestic, socio-demographic, and clinical data were collected after the respondents signed a voluntary informed consent form.
Data Cleansing
We analyzed the values of the PSM-25 items to identify irrelevant answers and outliers. Using the PSM-25 item scales, we calculated, for all observations, the Mahalanobis distances from the pattern consisting of the item average values. We then filtered out 11 outliers from the original 5,728 records. All outliers produced high Mahalanobis distances and revealed contradictory answers to interrelated questions. We also filtered out seven records with identical values in all PSM-25 items.
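One plausible implementation of this outlier screen (an assumption about the exact procedure, not the authors' code) is sketched below.

import numpy as np

def mahalanobis_distances(X):
    # distance of each row from the vector of item means
    mu = X.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
    diff = X - mu
    return np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))

rng = np.random.default_rng(1)
X = rng.integers(1, 9, size=(500, 25)).astype(float)  # toy PSM-25 answer matrix
d = mahalanobis_distances(X)
flagged = np.argsort(d)[-11:]  # e.g., the 11 most extreme records
print(f"max distance = {d.max():.2f}, flagged rows: {sorted(flagged.tolist())}")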
As there was no registration for the respondents, we checked the answers to the question: "Are you filling in this form for the first time?" For the repeated applications, we tried to find pairs with similar personal data, such as age, gender, educational level, marital status, occupation, and city. We identified 48 pairs (96 records) of repeat interviews of the same respondents. Among the 48 pairs, we identified 26 in which at least 20 days had passed between interviews. Those 26 pairs were analyzed separately as dependent samples. All 48 records of second interviews were removed from the main sample.
A total of three main grouping factors, including age, length of interview, and type of disorder (with no affective/anxiety disorder as a zero type), were used for extracting groups of records to be compared. We divided respondents into eight age groups and six periods. When comparing groups of records, we mostly used 1-5 age groups and 1-5 periods containing the majority of records.
Exploratory Analysis
We used the ANOVA test in IBM SPSS Statistics (RRID:SCR_019096) to compare the level and dynamics of distress in groups of respondents with/without affective or anxiety disorders. All groups corresponding to different time periods were separated. We obtained higher levels of distress for respondents with a disorder and different dynamics of distress levels for groups of respondents with/without a disorder (an increase/reduction in the distress level).
We used regression analysis to examine whether the total distress level depended on age. For all groups of records, we observed negative dependency between these two variables. As the age of respondents was distributed rather differently in the groups under observation, we had to use more detailed analysis to distinguish the effects of disorder type and age on the distress level.
Hypothesis Testing
While the gender composition of respondents was similar in all groups of observations (16% males and 84% females), the age distribution was essentially different. For example, the average age of respondents with a disorder was about 24, compared with 34 for those without a disorder.
To match the different groups of observations, we excluded random records so that the relative frequencies of ages became equal; we did not attempt to fit the samples to an ideal distribution but filtered all samples so that the total number of records removed was minimal. We solved two optimization tasks: in the first task, we removed as few records as possible; in the second task, we used weights equal to the inverse values of the sample sizes. The second task was used when the sample sizes were essentially different.
To compare different groups, we used factorial or one-way ANOVA and estimated standard errors and 95% confidence intervals for the average values of dependent variables. We also performed post hoc analysis. When a variable did not match a Gaussian distribution, we always used nonparametric tests, specifically repeated Mann-Whitney tests for two independent samples. We also confirmed that ANOVA tests are robust to the violation of normality for large sample sizes, as in our comparisons ANOVA and nonparametric tests gave similar results. When testing hypotheses for all the PSM-25 items, we took multiple comparisons into account. However, there was no need to lower the level of significance, as p-values were usually low and there were many positive results among the PSM-25 items.
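As an illustration of the two workhorse comparisons used in the results (pooled-standard-deviation Cohen's d and the Mann-Whitney U test), a small sketch with simulated stress totals follows; the numbers are not study data.

import numpy as np
from scipy.stats import mannwhitneyu

def cohens_d(a, b):
    # pooled-standard-deviation effect size for two independent samples
    na, nb = len(a), len(b)
    pooled = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled)

rng = np.random.default_rng(2)
aad = rng.normal(125, 30, 300)      # simulated PSM-25 totals, AAD group
control = rng.normal(95, 30, 900)   # simulated PSM-25 totals, controls
u, p = mannwhitneyu(aad, control, alternative="two-sided")
print(f"Cohen's d = {cohens_d(aad, control):.2f}, Mann-Whitney p = {p:.2g}")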
Sampling Characteristics
Based on the self-report data on the presence of mental disorders, the final sample of 5,662 records was divided into two groups. The research group included 1,375 records (24.1%) containing information on the presence of affective pathology: 590 (10.3%) depressive disorders (including dysthymia), 530 (9.3%) bipolar disorders (including cyclothymia), and 255 (4.5%) anxiety disorders (generalized anxiety disorder and panic disorder).
The control group included 4,278 respondents (75.9%) who reported no affective or anxiety disorders.
To assess the age differences, the following subgroups of respondents within the research and control groups were included in the analysis: adolescents from 16 to 18 (1.6 and 1.8%, respectively), young adults from 19 to 24 (2.5 and 4.1%, respectively), and adults from 25 to 44 (19.9 and 42.7%, respectively). In all the subgroups analyzed (age, history of diseases, and specificity of reactions to the pandemic), the male to female ratio in the sample remained stable: 16 and 84%, respectively.
The survey covered respondents living in all federal districts of Russia. Residents of major cities made up 19.2 and 35% of the sample (Moscow and St. Petersburg, with populations of over 10 million and 5 million, respectively). Residents of other cities with populations of over one million accounted for 16.2%. Respondents from cities with a population of less than one million people constituted 29.6% of the sample.
Stress in Comparison Groups
In the exploratory analysis, data were obtained on significantly higher rates of psychological stress (Cohen's d 0.8-1.6) in respondents with affective or anxiety disorders than for those with no mental disorder (Figure 1). At this point, we examined full groups of respondents with no adjustments to the age structures. In factorial ANOVA, we obtained significant differences with p < 2e-8 between groups for the factor of disorder (yes/no) and for the joint factor disorder * period. We obtained p = 0.051 for the factor of period. Post hoc analysis (least significant difference (LSD) test) confirmed the differences with p < 0.03 for all 2 * 3 = 6 groups except the pair period = 2 and period = 3 in the control group. For the factor of period, the tests of homogeneity of variances (Hartley F-max, Cochran C, Bartlett's chi-square) passed. The tests failed for the factor of disorder. However, we can assume that the difference between the groups of respondents with/without affective disorder is too large (p < 1e-15) to be overturned by homogeneity tests.
In all age subgroups and time periods, respondents selfreporting affective or anxiety disorders (research groups) continued to show significantly higher rates of psychological stress than those with no affective/anxiety disorders (control group). It is noteworthy that the differences in stress levels between the control and research groups in the overall sample increased from the introduction of epidemiological restrictions to the period after their cancellation. However, these dynamics were not uniform in individual age groups.
Dynamics of Stress Levels Between Periods of Epidemiological Restrictions
Among the three age subgroups, an increase in stress levels in the research group and a reduction in the control group between the 1st and 3rd periods were observed only among young adults aged 19-24 (Cohen's d = 0.32 and Cohen's d = 0.30; Figures 2B,E). In all the remaining figures, we performed the Mann-Whitney U test to confirm inter-group differences, as all samples were rather far from a normal distribution. Adolescents aged 16-18 from the research group showed higher rates of psychological stress in the 3rd period than those interviewed during the introduction of restrictive measures in the 1st period (Cohen's d = 0.39, Figure 2A), but no reliable dynamics were revealed in the controls (Figure 2D). Among adults in the control group, a reduction in stress levels between the 1st and 3rd periods was observed (Cohen's d = 0.40, Figure 2F), but there were no reliable dynamics in the research group (Figure 2C).
FIGURE 1 | Levels and dynamics of stress for respondents with/without affective or anxiety disorders.
High levels of stress among young adults in the research group were associated with higher somatic rates on the PSM-25 scale in the 3rd period compared with the 1st period (Cohen's d=0.26, Figure 3A). In contrast, individuals aged from 19 to 24 in the control group who were examined after the removal of the antiepidemic restrictions showed a lower level of somatization than those examined at the beginning of quarantine in the 1st period (Cohen's d=0.40, Figure 3B).
Nosological Characteristics of Stress and Behavior Associated With the Pandemic
The level of stress on the PSM-25 scale was specifically associated with affective/anxiety disorders. Among subgroups of respondents with depressive, bipolar, and anxiety disorders, individuals with bipolar disorders demonstrated significantly lower levels of stress compared with individuals with depressive (Cohen's d=0.15) and anxiety disorders (Cohen's d=0.27) ( Figure 4A).
It is also important to note that stress response characteristics were combined with the modification of protective behavior ( Figure 4B) and the search for information about the pandemic ( Figure 4C) both in the nosological subgroups of the research group and in the control group.
Respondents self-reporting depression and bipolar disorder used fewer protective measures simultaneously compared with the control group. However, there was a significant reduction in the concurrently practiced means of preventing infection only among those who reported depressive disorders (Cohen's d = 0.15), whereas among respondents with bipolar disorders the narrowing of protective measures was negligible (Cohen's d = 0.1). No reliable differences were found between the control group and the subgroup with anxiety disorders.
In the subgroup with depressive or bipolar disorders, respondents were less likely to search for news about the pandemic than those in the subgroup with anxiety disorders (Cohen's d = 0.28 and 0.28, respectively) and in comparison with the control group (Cohen's d = 0.17 and 0.16, respectively). Participants self-reporting an anxiety disorder were the most likely to turn to the news (compared with depressive or bipolar disorders, Cohen's d = 0.28 and 0.28, respectively; compared with the control group, Cohen's d = 0.16). Respondents in the control group demonstrated an average frequency of searching for information about the pandemic.
FIGURE 3 | Dynamics of somatization in presence of an affective/anxiety disorder among young adults. Significant differences: (A) between "1" and "3" with p = 0.01 and between "2" and "3" with p = 0.005; (B) between "1" and "3" with p = 0.046.
FIGURE 4 | Stress levels, anxiety, and behavioral reactions in respondents depending on the presence of an affective/anxiety disorder. Significant differences: (A) between "bd" and "ad" with p = 0.002; (B) between "hc" and "d" with p = 0.0001 and between "hc" and "bd" with p = 0.043; (C) between "hc" and "d" with p = 0.017, between "hc" and "bd" with p = 0.012, between "hc" and "ad" with p = 0.001, between "d" and "ad" with p = 0.0002, and between "bd" and "ad" with p = 0.0001.
DISCUSSION
Our research has demonstrated that the presence of affective or anxiety disorders is associated with a more severe response to the COVID-19 pandemic in different periods. Based on the sociodemographic characteristics, data on the behavioral reactions of the population and place of residence, as well as on the results of psychometric research on stress levels, we made four main observations. First, stress levels among respondents self-reporting an affective or anxiety disorder were higher at all periods of the study than among those with no mental disorders. Second, the dynamics of stress levels in the research and control groups were heterogeneous and varied across the age subgroups. Third, the type of affective disorder influenced protective behavioral patterns and intensity of searching for information about the pandemic. Fourth, individuals with bipolar disorders had significantly lower stress levels than respondents with depressive or anxiety disorders.
As far as we can ascertain from available literature, this is the first study to provide evidence that multidirectional dynamics of stress during the COVID-19 pandemic are determined not only by the affective status of respondents but also by their age groups. In a sample of adolescents (16-18) and young adults (19-24) reporting a history of affective/anxiety disorders, average stress levels at the time of the cancellation of restrictive measures (period 3) were higher than at the time of the introduction of epidemiological restrictions (period 1). Among young and adult respondents who denied having mental disorders, stress levels at the final stage of the restrictive measures (period 2) were lower than those initially identified.
The differences in stress levels and their dynamics in respondents who confirmed or denied the presence of affective/anxiety disorders (taking nosology into account) were linked to their behavioral patterns. An increase in time spent searching for information about the pandemic is known to be directly associated with increased anxiety (Nekliudov et al., 2020). At the same time, the usage of hand hygiene can be associated with the reduction of anxiety and stress associated with COVID-19 . In our sample, the history of anxiety disorders was associated with frequent searching for news about the pandemic. At the same time, the history of bipolar or depressive disorders was associated with less searching for news about COVID-19 in the media. Most notable is that respondents who reported a history of depressive disorders practiced the fewest protective behavioral strategies. Thus, the relatively favorable course of stress reactions in respondents with a history of bipolar disorders, on the contrary, was linked to a slight reduction in their protective behavioral patterns in relation to coronavirus.
The differences identified in behavior associated with the search for information about COVID-19 and protective measures in respondents from different nosological groups may be seen as a predisposition for a more effective response to stress among respondents self-reporting a bipolar disorder and respondents without mental disorders and less effective response among respondents self-reporting depressive or anxiety disorders. The wider spread of pandemic anxiety known from bipolar disorder literature is unlikely to be associated with the development of severe distress in our sample (Van Rheenen et al., 2020). It is possible that a stressful response to the COVID-19 pandemic may be related not to the intensity of anxiety stress but to a disturbance of an individual's adaptive-compensatory reactions (Sorokin et al., 2021). The different results regarding bipolar disorders in our study and the COLLATE project can also be explained by the use of different psychometric tools (Van Rheenen et al., 2020).
According to our data, this is one of the largest studies of the determinants of stress levels in the Russian population to take the presence of mental disorders into account. The results of this study formed the basis for the development of algorithms for the diagnosis and therapy of mental disorders registered during the COVID-19 pandemic in Russia. The findings are important for public health in taking preventive screening measures among the population to reduce the burden of the COVID-19 pandemic.
Limitations
The study had several limitations. First, it had a cross-sectional rather than longitudinal design, so the information on stress dynamics should be interpreted as a population change in response to the pandemic rather than as an increase or reduction in stress among the same respondents over time. Second, data on the psychiatric condition of the subjects were based on their self-reports. According to the literature, this correlates strongly with the results of medical history collection but does not allow us to speak of verified diagnoses. Third, the need to comply with quarantine restrictions meant that the only possible format for conducting a study in the initial stages of the pandemic was an online questionnaire, which has its own features: the predominant participation of women in such studies and selection bias against persons who are not active Internet users. Fourth, the internal consistency of the two subsets of questions about COVID-19 concerns and protective behavior was low. Meanwhile, according to Lee J. Cronbach, a reliability measure can reflect not only the consistency among items in a test but also the agreement among scorers of a performance test and the stability of performance scores on multiple trials of the same procedure (Cronbach and Shavelson, 2004). In this sense, our results were considered satisfactory and as reflecting the inter-subject diversity of COVID-19 reactions, as well as the differences revealed across periods of the pandemic, and they served as an addition to the main psychometric instrument (PSM-25), which demonstrated excellent reliability. Fifth, some data obtained in the course of the study, in particular about the somatic diseases of respondents, their education, family status, and the current level of the epidemic process in their region of residence, were not taken into account in the analysis in this article, as they require further dynamic study given the protracted nature of the pandemic.
CONCLUSION
Assessment of the population's psychological reactions to the COVID-19 pandemic is a complex task that requires not only consideration of socio-geographical (age, residence) and clinical characteristics (history of affective or anxiety disorders), but also an analysis of the time periods. Individuals self-reporting affective or anxiety disorders tend to respond more emotionally to the pandemic by forming a wide range of anxiety concerns and make less effective use of protective behavioral strategies. As a result, this may determine different trends in stress response: an increase in distress during a pandemic among those who report affective/anxiety disorders and a reduction among those who report no mental disorders. Given the dynamics observed, psychiatric services should be prepared for a greater burden of affective and anxiety disorders after the actual end of the pandemic, especially among young people. Future studies should pay more attention to the secondary mental health effects of the COVID-19 pandemic on the most vulnerable groups.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by Independent Ethics Committee in V. M. Bekhterev National Medical Research Center for Psychiatry and Neurology. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
Conceptualization of the study, goals, and aims: GM, NL, and EK. Investigation: EK, MS, OM, GR, and MK. Methodology and project administration: GM, NL, EK, and MS. Resources, writing, reviewing, and editing: NN, GM, and NL. Statistics: TM, DV, and MS. Writing (original draft): MS, EK, TM, and DV. All authors read and approved the final version of the manuscript.
"year": 2022,
"sha1": "859860a094f2fd27b2ccaabe9c318d376d931314",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "859860a094f2fd27b2ccaabe9c318d376d931314",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
A Study on the Preparation of Microbial and Nonstarch Polysaccharide Enzyme Synergistic Fermented Maize Cob Feed and Its Feeding Efficiency in Finishing Pigs
1000 g maize cob mixed material was synergistically fermented by adding 2.5% composite probiotics and 0.06-0.08% NSP (nonstarch polysaccharide) enzyme to prepare fermented feed, and its effectiveness as feed for fattening pigs was investigated. The results showed that the appearance, texture, and nutrient quality of the maize cobs significantly improved after fermentation; the total number of bacteria was 4.5 × 10^10 CFU/g, and the protein content was 7.1%. Compared to the control group, the pigs in the 6% fermented maize cob feed experimental group showed significantly increased daily feed intake, daily weight gain, and nutrient digestion rate (p < 0.05) and a reduced feed conversion ratio (p < 0.05). Most indicators, including slaughter performance and meat quality, significantly improved. In addition, beneficial bacteria including Lactobacillus in the intestines of the finishing pigs significantly increased, and pathogenic bacteria including Escherichia coli in the intestines and feces were significantly reduced (p < 0.05). The intestinal crypt depth, VH/CD ratio, and ileal mucosal immunity of the finishing pigs also significantly improved (p < 0.05), and the cytokine contents and gene expression of sIgA, IL-8, and TNF-α significantly increased (p < 0.05). It could be concluded that the addition of 6% fermented maize cob feed to the diets of finishing pigs could promote their growth; improve their production performance, slaughter performance, and meat quality; and enhance their intestinal microecological balance and immunity.
Introduction
As a major feed crop worldwide, maize has long been widely used in animal husbandry [1]. Maize cob is the central core that remains after removing the kernels from the maize ear. The annual output of corn in China exceeds 200 million tons, of which maize cobs account for approximately 10%, an extremely large output exceeding 20 million tons [2]. With the development of science and technology, the field of maize cob deep processing has expanded continuously, and maize cobs have been processed into a series of high value-added products, such as furfuryl alcohol, xylose, activated carbon, and glucose [3][4][5]. Maize cobs have also been widely used to produce ethanol [6][7], manufacture food packaging [8], extract oil [9], produce cultivation material for crops [10], and produce feedstuffs [11][12][13]; thus, maize cobs have high potential and value that should be fully exploited. Studies have shown that maize cobs primarily contain 32-36% cellulose, 35-40% hemicellulose, 17-20% lignin, and small amounts of ash and other components [14][15]. Their crude fiber content is high, and their palatability is poor. Because the digestive utilization rate when maize cobs are fed directly to animals is low, they are rarely used in animal production. Thus, preparing microbially fermented maize cob feed is an economically feasible approach [16][17].
For many years, probiotics such as Lactobacillus, yeasts, and Bacillus subtilis have been widely used in feed fermentation [18][19]. In practice, however, simple microbial fermentation alone yields a low protease content, which does not fulfill actual production needs. In addition, antagonism between microbial strains may also exist, which consequently affects the fermentation products [20][21]. Synergistic microbial fermentation refers to fermentation in which enzymatic hydrolysis is applied in conjunction with probiotic bacteria. The addition of enzyme preparations overcomes the problem of insufficient enzyme production during fermentation by a single type of microorganism and improves the utilization efficiency of feed macromolecules by the microorganisms [22][23]. In addition, a variety of organic acids and aroma substances produced by these probiotics during fermentation significantly improve the palatability of feed and regulate the intestinal health of animals [24][25].
Thus, this study combined the advantages of microbial probiotics in improving intestinal health and nonstarch polysaccharide (NSP) enzymes in degrading the principal nutritional components of maize cobs to develop a synergistic microbial fermented feed and investigate its feeding efficiency in finishing pigs. High-quality mixed feed suitable for finishing pigs was developed, thereby transforming waste into a valuable resource, extending the industrial chain of agricultural byproducts and waste materials, such as maize cobs, and providing a basis for its application in animal husbandry.
2.3. Preparation of Fermented Maize Cob Feed
For every 1000 g of material to be fermented, 25 g of bacterial powder was weighed out. The bacterial powder was then activated by adding warm water at a temperature of 25-30°C and stirred thoroughly. The amount of water was set according to the manufacturer's formulation, and the warm water contained the amount of brown sugar specified in the formulation, thoroughly mixed in. The raw materials were prepared as specified by the formulation, mixed thoroughly, loaded into the fermenter, combined with the activated bacterial mixture, and mixed once again for closed fermentation. Fermentation conditions were as follows: fermentation temperature of 25-30°C and resting time of 5-7 d. The fermented material was suitable for feeding when its color became deeper and darker and it developed a distinct scent.
Determination of Fermented Feed Product Performance.
The appearance of the product before and after fermentation, microbial strain content, and nutrient composition were analyzed. The microbial strain contents were measured by live bacteria plating. Crude protein, dry matter, crude ash, neutral detergent fiber, acid detergent fiber, crude fat, reducing sugar, calcium, phosphorus, and other nutrient components were analyzed by referring to conventional feed analysis methods [26].
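As a reminder of the arithmetic behind live-count figures such as the abstract's 4.5 × 10^10 CFU/g, a small sketch of the serial-dilution plate count follows; the colony count and dilution are illustrative, not measured values from this study.

def cfu_per_gram(colonies, dilution, plated_ml, sample_g=1.0):
    # CFU/g = colony count / dilution factor / plated volume / sample mass
    return colonies / dilution / plated_ml / sample_g

print(f"{cfu_per_gram(45, 1e-8, 0.1):.2e} CFU/g")  # -> 4.50e+10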
Measurement Indices and Methods for Fermented Maize Cob Feed for Finishing Pigs
For the growth performance and nutrient consumption measurements, a total of 200 healthy Duroc × (Landrace × Yorkshire) three-way crossbred finishing pigs weighing 60.12 ± 0.75 kg were selected and randomly divided into 4 groups. Each group included 5 replicates, and each replicate included 10 pigs (of similar weight), divided evenly between males and females, for 60-120 kg feeding tests. The 4 groups included the control group and 3 experimental groups. The control group was fed a basic diet (Table 1, according to the NRC (2012) nutrient requirements for finishing pigs), and the experimental groups were fed the basic diet supplemented with 4, 6, and 8% fermented maize cob feed, replacing energy components such as corn and soybean meal in the basic diet. The concentration ratios (4%, 6%, and 8%) were based on the addition ratios of conventional fermented feed in livestock and poultry and on the results of prefeeding in the early stage of this experiment [27][28]. The growth performance and nutrient digestibility of each group of finishing pigs were analyzed in a finishing pig house with relatively stable and controlled conditions. Further, the growth performance indices of average daily weight gain, feed intake, and feed conversion ratio were investigated in finishing pigs fed fermented maize cob rather than conventional feed. For the evaluation of nutrient digestibility, titanium dioxide (TiO2) was used as an external marker in digestion tests. 0.1% TiO2 was added to the experimental diets containing fermented feed, and both feed and fecal samples were collected from each group after prefeeding for 5 days. Gross energy (GE), dry matter (DM), crude protein (CP), ether extract (EE), crude ash (Ash), calcium (Ca), and phosphorus (P) in the samples were analyzed to evaluate the effects of fermented maize cob feed on nutrient digestibility in finishing pigs. Ti contents were determined as described by Morgan et al. [29]. The nutrient digestibility (%) was calculated as: [1 − (Ti content in feed / Ti content in feces) × (nutrient content in feces / nutrient content in feed)] × 100.
For the slaughter performance and meat quality, after testing, five finishing pigs from each experimental group were randomly selected for slaughter, and the slaughter performance, meat quality, muscle fat levels, and fatty acid levels of the pork were evaluated. Each index was measured as described by Panella-Riera et al. [30]. All pigs were slaughtered with a normal humane procedure, and all efforts were made to minimize suffering. The pigs were euthanized by electric shock and then dehaired, and the carcasses were dissected.
For intestinal performance and ileal mucosal immunity, after the feeding trial, fecal samples were aseptically collected from the rectum of the finishing pigs before slaughter, and the total bacteria and E. coli counts of the collected samples were measured (plate colony counting method, the same as below). The morphology of the intestinal tissue was examined at the time of slaughter, and the duodenum, jejunum, ileum, and cecum were isolated. Of these, the duodenum, jejunum, and ileum were stored in 10% neutral formalin buffer solution, and frozen sections were prepared as described by Hu et al. [31]. Hematoxylin-eosin (HE) staining was then performed. The villus height (VH), crypt depth (CD), and VH/CD ratio were calculated. Chyme from the ileum and cecum was collected for microbial flora determination. Approximately 1.5 cm of the distal ileum was treated with normal saline, frozen in liquid nitrogen, and stored in a -80°C freezer. The double-antibody sandwich enzyme-linked immunosorbent assay (ELISA) method was used to measure the contents of secreted immunoglobulin A (sIgA), interleukin-8 (IL-8), and tumor necrosis factor-α (TNF-α) in the intestinal tissue supernatant. All kits were purchased from Nanjing Jiancheng Bioengineering Institute, and a real-time PCR assay was used to measure the mRNA expression of IL-8 and TNF-α. The real-time PCR reaction mixture was as follows: 10.0 μL of 2× Master Mix (Beijing Tiangen Biochemical Technology Co., Ltd.), 0.5 μL of primer F (10 μM), 0.5 μL of primer R (10 μM), 1.2 μL of cDNA (30 ng/μL), and diethyl pyrocarbonate- (DEPC-) treated water to a total volume of 20 μL. Primer pairs for each factor are shown in Table 2 [32]. The reaction procedure was as follows: 95°C for 30 s, followed by 40 cycles of 95°C for 5 s and 60°C for 35 s. The melting curve analysis was based on automated fluorescence measurements as follows: 60°C for 60 s and 95°C for 15 s (60°C-95°C). The 2^−ΔΔCT method was used to calculate the expression of each factor in the experimental groups with different amounts of fermented maize cob feed relative to the control group, with 18S rRNA as the internal reference gene [33].
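A minimal sketch of the 2^−ΔΔCT calculation named above follows, with 18S rRNA as the reference gene and purely illustrative Ct values.

def fold_change(ct_target_trt, ct_ref_trt, ct_target_ctl, ct_ref_ctl):
    d_ct_trt = ct_target_trt - ct_ref_trt   # ΔCt in the treated sample
    d_ct_ctl = ct_target_ctl - ct_ref_ctl   # ΔCt in the control sample
    return 2 ** -(d_ct_trt - d_ct_ctl)      # 2^-ΔΔCt

# IL-8 in a 6%-additive sample vs. control (illustrative Ct values)
print(f"IL-8 fold change = {fold_change(24.1, 12.0, 25.6, 12.1):.2f}")  # ~2.64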
2.6. Statistical Analysis
All data were organized in Excel, and the one-way ANOVA procedure in SPSS was used for single-factor analysis of variance, with the Waller-Duncan procedure for multiple comparisons between groups. All results are presented as the mean ± SD, and means were considered different when p < 0.05.
Measurement Results of Fermented Maize Cob Feed Product Performance
Results of the performance measurements of maize cob before and after fermentation are shown in Table 3. Maize cobs exhibited a deeper color following fermentation, producing a transparent wine-like and lactic acid scent, with a clear change in color and texture. Compared to the levels before fermentation, the contents of crude protein, calcium, and phosphorus in maize cobs increased after fermentation, whereas the contents of dry matter, crude fat, neutral detergent fiber, acid detergent fiber, and reducing sugar decreased. Compared to single-strain fermentation, synergistic microbial fermentation of maize cob with the addition of NSP enzymes significantly increased each microbial strain population, fiber degradation, and protein content (p < 0.05), while the residual contents of dry matter, crude ash, and reducing sugar decreased. The results showed that the addition of NSP enzymes could increase the utilization efficiency of maize cob macromolecules and significantly improve the protein conversion efficiency. Furthermore, this addition provided more energy for microbial growth and significantly increased the number of each microbial strain.
Note: "-" means no value. All data in the table are determination results for maize cob before and after fermentation; five batches were measured. Values in the same row sharing the same superscript letter do not differ significantly (p > 0.05), while different letters indicate a significant difference (p < 0.05); the same applies below.
Effects of Fermented Maize Cob Feed on Growth Performance and Nutrient Consumption of Finishing Pigs
As shown in Table 4, daily feed intake significantly increased in every experimental group as the amount of fermented feed increased (p < 0.05), compared to the control group. Daily weight gain significantly increased (p < 0.05), whereas the feed conversion ratio was reduced. When the amount of additive was 6%, the daily weight gain and feed conversion ratio were significantly improved compared to those of the control and 4% additive groups (p < 0.05), with no significant differences compared to the 8% additive group. Table 5 shows that gross energy, dry matter, organic matter, crude protein, crude fat, calcium, phosphorus, and other nutrient digestibility indices of fermented maize cob feed significantly increased compared to the control group (p < 0.05). Among these indices, when the amount of additive was 6%, many nutrient digestibility indices were significantly higher than in the 4% additive group (p < 0.05), with no significant differences compared to the 8% additive group.
Effects of Fermented Maize Cob Feed on Slaughter Performance and Meat Quality of Finishing Pigs
Table 6 shows that the addition of different proportions of fermented maize cob feed had positive effects on most indices of slaughter performance, meat quality, muscle fat, and fatty acid composition compared to the control group, although some indices (dressed weight, dressing percentage, tenth-rib fat thickness, and percent lean) did not improve significantly with increased additive. Overall, the 6% additive condition significantly improved the slaughter performance and meat quality of the finishing pigs (p < 0.05).
Effects of Maize Cob Fermented Feed on Intestinal Performance and Ileal Mucosal Immunity in Finishing Pigs
Figure 1(a) shows that different proportions of fermented maize cob feed did not significantly increase the villus height of finishing pigs compared to the control group, although crypt depth (Figure 1(b)) and the VH/CD ratio (Figure 1(c)) significantly decreased and increased, respectively, as the additive amount increased (p < 0.05). Thus, intestinal morphology and structure were improved. The addition of fermented maize cob feed had significant effects on the fecal and intestinal microorganisms of the finishing pigs (p < 0.05). Compared to the control group, the microbial flora in the feces of finishing pigs increased and the number of E. coli significantly decreased (p < 0.05) as the amount of additive increased (Figure 2(a)), while the Lactobacillus content in the ileum and cecum significantly increased and the number of E. coli decreased (p < 0.05) (Figures 2(b) and 2(c)). Fermented maize cob feed also significantly improved the ileal mucosal immunity of the finishing pigs (p < 0.05) (Figure 3). Compared to the control group, the cytokine contents and the expression of the corresponding immune factor genes in each experimental group increased as the amount of additive increased, all of which reached significance (p < 0.05).
Discussion
Synergistic microbial fermentation techniques connect the entire process of feed fermentation, processing, and production. The combined action of microbial probiotics and enzymatic hydrolysis technology was the biggest technological breakthrough in feed fermentation and is important for the future development of biological feed [34]. The present study demonstrated that the addition of different proportions of synergistically fermented maize cob feed promoted the growth, nutrient consumption, slaughter performance, and overall intestinal health of finishing pigs. It could also replace part of the energy component of the basic diet formulated according to the NRC (2012) nutrient requirements for finishing pigs, and these effects were significantly enhanced as the proportion of additive increased until they plateaued at a peak value.
The nutrient consumption rate of livestock feed is an important index for measuring the digestive utilization of feed by livestock and for evaluating its nutritional value [35]. NSP, the principal component of plant-derived cell walls, is not easily digested or utilized by monogastric animals. In addition, water-soluble nonstarch polysaccharides (e.g., arabinoxylan and β-glucan) are highly viscous and can increase the chyme viscosity in the intestines of animals, which blocks interactions between nutrients in feed and digestive juices and impairs the digestion of nutrients. The addition of NSP enzymes can eliminate or reduce these adverse effects of NSP [36][37]. NSP enzymes can degrade plant cell walls, cleave internal soluble nonstarch polysaccharides, and promote the release of nutrients bound in cell walls. Reducing the viscosity of the intestinal contents is beneficial for interactions between nutrients and enzymes and improves digestion rates [38]. On the other hand, composite probiotics in fermented feed can decompose macromolecular substances that are difficult for livestock and poultry to digest into small-molecule nutrients, such as small peptides, glucose, amino acids, and vitamins, which are easily digested and absorbed by the animal [39]. Additionally, lactic acid and ethanol secreted by probiotics during their growth also improve the palatability of feed and stimulate increased feed intake by pigs [40]. In the present study, probiotics such as lactic acid bacteria and yeasts contained in the fermented maize cob feed underwent synergistic fermentation with NSP enzymes. Consequently, the nutrient consumption rate and nutrient digestion and absorption in the fermented maize cob feed experimental groups of finishing pigs significantly increased, and the production performance of the finishing pigs (feed conversion ratio, daily weight gain) also significantly improved.
Slaughter performance and meat quality are the main indices for evaluating the performance of livestock products [41]. Adding fermented maize cob feed promoted the absorption of dietary nutrients, accelerated the growth rate, and increased the dressed weight and dressing percentage of finishing pigs. However, when a high proportion (8%) of fermented maize cob feed was added, the energy intake became too high and rapid back fat accumulation occurred, thus reducing the percent lean and slaughter quality of the finishing pigs. Studies have shown that the metabolites of microorganisms can increase the cytoplasmic concentration in pork cells, increase the ability of the pork to absorb water, and reduce the drip loss of pork [42]. The present study also found that fermented maize cob feed can reduce the drip loss of pork, which may be due to enhanced enzymatic hydrolysis of nutrients in the fermented feed and increased production of metabolites from microbial growth. Moreover, studies have found relationships between pork eating quality (tenderness, juiciness, and flavor) and muscle fatty acid composition [43][44]. In the present study, as the contents of saturated fatty acids and monounsaturated fatty acids in each fermented maize cob feed experimental group increased, pork eating quality increased to some degree. Conversely, increased polyunsaturated fatty acid content led to decreased eating quality.
The balance of the microecological system in animal intestines plays an important role in improving growth rate, promoting immune system development, maintaining normal immune function, defending against pathogen invasion, and reducing disease occurrence in animals [45]. The present study found that the addition of fermented maize cob feed to the diet of finishing pigs provided abundant probiotics that rapidly occupied the ecological niches in the intestines of the pigs, thereby establishing growth dominance, significantly increasing the number of lactic acid bacteria in the intestines, and reducing the amount of E. coli in the intestines and feces. This improved the overall microecological balance in the intestines of the pigs and enhanced their immunity.
Morphological structural integrity, villus height, and crypt depth of the small intestine are important criteria for measuring the health of an animal and its ability to digest and absorb nutrients [46]. Studies of the intestinal surface suggest that longer villi are related to an improved ability of the small intestine to absorb nutrients, shallower crypts are related to improved small intestinal secretory activity, and a greater villus height/crypt depth (VH/CD) ratio is related to a larger intestinal lining area and higher digestive capacity [47]. In the present study, the fermented maize cob feed contained yeast, whose cell wall contains β-glucan and mannan; these reduce the binding of antigens to the gastrointestinal mucosa of pigs via the adsorption, phagocytosis, destruction, and absorption of invading bacteria. This consequently protects the gastrointestinal mucosa from damage, preserves the morphological structural integrity of the small intestine, and promotes small intestine development, with significant improvement in crypt depth and the VH/CD ratio.
sIgA is an important effector molecule in the intestinal mucosa that can regulate intestinal microorganisms and neutralize toxins [48]. Cytokines such as IL-8 and TNF-α are important signaling molecules in the immune system [49]. Probiotics stimulate the expression and secretion of proinflammatory factors in the intestinal immune cells of pigs and regulate host immune function towards a more stable state [50][51]. In the present study, fermented maize cob feed significantly increased the content of sIgA and of the IL-8 and TNF-α immune factors in the ileum of finishing pigs. This may be because probiotics in the fermented maize cobs entered the intestines as antigens that stimulate the mucosa and promote B cell proliferation and differentiation into plasma cells, which then secrete large amounts of sIgA to improve mucosal immune function and disease resistance. On the other hand, the added exogenous microorganisms were recognized by the animal's body, which stimulated the mucosa to produce a mild inflammatory response. This increased the expression of proinflammatory factors in the small intestinal mucosa, which in turn increased anti-inflammatory factors in the intestines, enhanced the immune function of the ileal mucosa, and improved the anti-infective capacity of the body.
However, the addition of more probiotic-fermented feed did not always lead to improvements, since an optimal dose exists [52,53]. The present study found that adding more than 6% fermented maize cob feed slowed the gains in growth performance and other growth- and production-related indices in finishing pigs. The underlying reason is that, as the amount of probiotic-fermented feed increased, the stress on the intestines of the pigs increased and immune factor content and expression decreased, which may eventually lead to a faster fat deposition rate and lower meat quality. Considering the economic benefits of overall feeding costs, the addition of 6% fermented maize cob feed was selected as the optimal dosage for finishing pigs.
In the present study, a combined probiotic and NSP enzyme fermentation technique was employed to prepare a fermented maize cob feed, which enhanced the degradation of maize cob components and improved its nutritional value. The addition of 6% fermented maize cob feed to the diets of finishing pigs promoted their growth and improved their production performance, slaughter performance, and meat quality. In addition, their intestinal microecological balance was improved and their immunity was enhanced. These findings provide a theoretical basis and practical examples for the comprehensive utilization of maize cobs and the development of microbe-fermented feed preparation techniques.
Data Availability
All data are fully available without restriction, and all relevant data are within the paper.
Conflicts of Interest
The authors declare that there is no conflict of interest regarding the publication of this paper. The financial allocation is 50000 RMB, and the project is still in progress. The financially supporting body was the Key Laboratory of Fujian Universities Preventive Veterinary Medicine and Biotechnology at Longyan University, China. The first author (Biaosheng Lin) hosted the Science and Technology Planning Project of Fujian Province, China (grant number 2020N01010238), in April 2020. The project name was "A study on the preparation of synergistic microbial fermented maize cob feed and its feeding efficiency in finishing pigs." The financial allocation is 150000 RMB, and the project is still in progress. The financially supporting body was the Fujian Science and Technology Department, China. | 2020-11-19T09:15:35.915Z | 2020-11-13T00:00:00.000 | {
"year": 2020,
"sha1": "0737f4709101322b547dd5df3dbb83304b9c1d2f",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/bmri/2020/8839148.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cc3461a7be8e5c9254149843262787f80e508177",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
52177127 | pes2o/s2orc | v3-fos-license | Effectiveness of endoscopic septoplasty in different types of nasal septal deformities: our experience with NOSE evaluation
SUMMARY Septal deviations are the most frequent cause of nasal obstruction and represent a common complaint in rhinologic practice. Since the first description by Lanza et al. in 1991, the use of the endoscope for the correction of septal deformities has become increasingly frequent. The purpose of this study is to evaluate the effectiveness of endoscopic septoplasty for the correction of each of the 7 types of septal deformities according to Mladina's classification. A retrospective chart review was performed in 59 consecutive patients presenting to our Department for endoscopic septoplasty from February 2012 to August 2014. For each deviation, descriptive statistics (mean and standard deviation, significant increase/decrease) were used to assess the corrective capacity and time-dependent effects at follow-up. This study shows that the corrective power of endoscopic septoplasty differs according to the type of deviation. To our knowledge, this is the first study that evaluates the corrective capacity of this technique for each deviation by analysing pre- and postoperative objective outcomes as well as subjective outcomes gathered from the validated NOSE questionnaire. Even if endoscopic septoplasty may now be considered a reliable alternative to the classic technique, it is essential to identify the deformity correctly preoperatively in order to provide the proper therapeutic choice.
Introduction
Septal deviations are the most common cause of nasal obstruction, representing a common complaint in rhinologic practice. Since its introduction, the procedure for correction of nasal septal deformities has undergone several modifications, from radical septal resection to preservation of the septal framework and nasal mucosa. Frequently, septal deformities can be associated with lateral wall diseases or may be their cause. A significantly deviated nasal septum has been implicated in epistaxis, sinusitis, obstructive sleep apnoea and headaches attributable to a contact point with structures of the lateral nasal wall 1 . For this reason, correction of septal deformities cannot be separated from treatment of disorders of the lateral wall when present. Thus, endoscopic septoplasty is a useful technique for treating symptomatic deformities, but also for improving intraoperative surgical access in lateral nasal wall surgeries (e.g. dacryocystorhinostomy, functional endoscopic sinus surgery) 2 3 . Since the first description by Lanza et al. in 1991, the use of the endoscope for the correction of septal deformities has become increasingly frequent 4 . In the literature, there is increasing consensus in favour of endoscopic septoplasty compared to the conventional approach. However, to date, no author has focused attention on the effectiveness of endoscopic correction considering all types of septal deformities. More than 20 years ago, Mladina published a systematic classification of septal deformities, precisely defining clinical findings at the nasal septum and proposing seven different types of deformity 5 6 . The purpose of this study is to evaluate the effectiveness of endoscopic septoplasty for the correction of each of the 7 types of septal deformities according to Mladina's classification.
Materials and methods
A retrospective chart review was performed in 184 consecutive patients presenting to our Department for endoscopic septoplasty during a 30-month period (February 2012 to August 2014). Inclusion criteria were as follows: age of at least 17 years, septal deformity with nasal obstruction, and persistent symptoms after at least 4 weeks of therapy including topical nasal steroids with or without antihistamines. Patients with sinonasal malignancy, need for nasal surgery other than septoplasty (such as functional endoscopic sinus surgery - FESS -, nasal valve surgery, turbinate surgery, etc.), sinonasal infections, or sinonasal inflammatory disease were excluded from the study. Given that the presenting symptoms may suggest some forms of rhinosinusitis (chronic or acute recurrent forms), all patients were preoperatively evaluated by paranasal sinus computed tomography (CT) (120 kV, 215 mA s, 1 mm slice thickness). Among the 184 patients studied, 125 were excluded for the presence of radiological signs of chronic rhinosinusitis with anatomical variants as follows: inferior turbinate hypertrophy in 93% of cases, middle turbinate pneumatisation in 37%, uncinate process pneumatisation in 8%, and dysventilated sinuses in 60%. Therefore, 59 patients (32%) fulfilled the inclusion criteria for the present study; they comprised 22 females and 37 males with a mean age of 34.9 years (range 18 to 69). The most frequent symptom was nasal obstruction, present in all cases; facial pain occurred in 27 cases, and postnasal drip and headache in 7 cases each. All patients underwent allergic evaluation with skin prick tests for inhalants. The degree of septal deviation was calculated using OSIRIX ® Software (Pixmeo SARL, Bernex, Switzerland, 2003-2014). The angle defined by a line passing through the most deviated point and a line perpendicular to the floor of the nose was calculated to determine the degree of septal deviation (Fig. 1). Moreover, nasal spaces were directly assessed by nasal endoscopy in all cases. Using these examinations, we were able to stratify the patient cohort into seven groups based on Mladina's classification of nasal septum deviation (Table I).
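The angle measurement reduces to simple plane geometry on the coronal slice. The following minimal Python sketch (with hypothetical coordinates, not taken from the study data) illustrates the calculation that the imaging software performs interactively:

import math

def septal_deviation_angle(apex, base):
    # Angle (degrees) between the base-to-apex line of the septum and a
    # line perpendicular to the nasal floor, on a coronal CT slice with
    # the nasal floor horizontal. Coordinates are (lateral, vertical) in mm.
    dx = apex[0] - base[0]  # lateral displacement of the most deviated point
    dy = apex[1] - base[1]  # vertical distance above the nasal floor
    return math.degrees(math.atan2(abs(dx), abs(dy)))

# Hypothetical example: apex deviated 6 mm laterally, 20 mm above the base.
print(round(septal_deviation_angle(apex=(6.0, 20.0), base=(0.0, 0.0)), 1))  # ~16.7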
All patients included in our study underwent endoscopic septoplasty according to the technique described herein.

Table I. Mladina's classification of septal deformities.
Type 1: Unilateral vertical septal ridge in the valve region that does not reach the valve itself; it does not change the physiologic valve angle (15%) and therefore usually plays just a mild role in nasal pathophysiology.
Type 2: Unilateral vertical septal ridge in the valve region that touches the nasal valve, thus reducing the physiologic valve angle (15%).
Type 3: Unilateral vertical ridge located more deeply in the nasal cavity, opposite the head of the middle turbinate.
Type 4: Bilateral deformity consisting of type 2 on one side and type 3 on the other.
Type 5: Almost horizontal septal spur that sticks laterally and deeply into the nasal cavity; the opposite side of the nasal septum is always flat.
Type 6: Massive unilateral intermaxillary bone wing with a "gutter" between it and the rest of the septum on this septal side; on the other septal side, there is an anteriorly positioned basal septal crest.
Type 7: Very variable combination of the previous types.

The procedure was performed under general anaesthesia. The septum was injected with 1% xylocaine with 1:20,000 epinephrine on the convex side of the septum using a 0° rigid 4 mm Hopkins rod lens endoscope. In Mladina's type 5 and 6 (Cottle's areas IV and V) deformities, a horizontal hemitransfixion incision was made, parallel to the nasal floor on the apex of the spur, to expose the most deviated part (Fig. 2a). A submucoperichondrial flap was raised using a Cottle elevator under endoscopic visualisation to expose the underlying bone at the most deviated part. To avoid contralateral mucosal damage, careful submucoperichondrial dissection on the opposite side was performed using a Cottle elevator. Flaps were elevated superiorly and inferiorly to expose the underlying bony or cartilaginous spur (Fig. 2b). The bony protrusion was removed using a chisel placed at the base of the spur. In Mladina types 2, 3 and 4 deformities (Cottle's areas I, II, III), we performed an "endoscopic-assisted septoplasty". A vertical incision was made on the concave side of the septum to expose the abnormality at the bony-cartilaginous junction. The initial mucoperichondrial flap was elevated using Freer's elevator and a nasal speculum. Further elevation was done using a 0° rigid nasal endoscope (4 mm), held in the left hand, keeping the tip of the endoscope between the mucoperichondrial flap and the septal cartilage (Fig. 2c). The right hand was used for instrumentation. Flap elevation was performed in the correct cleavage plane to minimise bleeding, and exposure was limited to the target area. Subluxated cartilage from the crest was shaved using a No. 15 blade Bard-Parker knife to resect the excess cartilage inferiorly, without dislocating the vomero-chondral junction (Fig. 2d). In all cases, mucosal flaps were repositioned back in place and fixed using a silastic stent in order to avoid mucosal damage during packing removal. Nasal packing (Merocel, Medtronic, Mystic, CT, USA) was placed in both nasal fossae and removed after 48 hours. Patients were usually discharged after 48 hours. All patients received post-operative antibiotic therapy with an oral cephalosporin for one week, saline nasal douching, and oral steroids at decreasing dosage. The main outcome measure used in the study was the NOSE (Nasal Obstruction Symptom Evaluation) scale, with a grading score from 0 to 5 (Fig. 3). All patients were asked to complete the NOSE scale one week before surgery and then at 3 and 6 months post-operatively. Nonparametric analysis (Wilcoxon signed rank test) was used to compare baseline and follow-up NOSE scores; p values < 0.05 were considered statistically significant. For each deviation, descriptive statistics (mean and standard deviation, significant increase/decrease) were used to assess the possibility of correcting each type of deviation.
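As an illustration of the nonparametric comparison described above, the short Python sketch below applies the Wilcoxon signed-rank test to paired NOSE scores using SciPy; the score values are hypothetical and serve only to show the computation:

import numpy as np
from scipy.stats import wilcoxon

# Hypothetical paired NOSE totals for the same patients, one week before
# surgery and 3 months after surgery.
nose_pre = np.array([70, 65, 80, 55, 75, 60, 85, 50, 70, 65])
nose_3mo = np.array([25, 30, 20, 35, 15, 40, 25, 30, 20, 35])

stat, p = wilcoxon(nose_pre, nose_3mo)  # tests the paired differences
print(f"Wilcoxon signed-rank: W = {stat}, p = {p:.4f}")
print("significant at 0.05" if p < 0.05 else "not significant at 0.05")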
Nasal endoscopy was performed in all patients at given intervals (15 days, 1 month, 3 months, and 6 months after surgery) to assess possible complications.
Results
Mean follow-up time was 6.3 months (range 3-14 months). The patient cohort was divided according to the Mladina classification as follows: type 5 was the most frequent deviation observed (23.7%, 14 cases); types 3 and 6 were also relatively frequent (20.3%, 12 cases and 18.6%, 11 cases, respectively); types 2 and 1 were less common (13.5%, 8 cases and 11.8%, 7 cases, respectively); types 4 and 7 were rare (6.7%, 4 cases and 5%, 3 cases, respectively). The disease-specific QOL scores assessed with the NOSE scale at different intervals of time are detailed in Table II. Compared to baseline, the scores registered at 3 and 6 months after surgery showed significant improvement in nasal symptoms (p < 0.05). The results are shown in Table III. Significant decreases in nasal obstruction, trouble sleeping, snoring and mouth dryness in the morning were observed between the preoperative period and 3 months after septoplasty. On the other hand, no statistically significant differences between the 3- and 6-month scores were observed. The analysis of the NOSE scores for each deformity showed a different corrective power depending on the type of deviation treated. In more detail, the greatest corrective capacity was found for deviation types 5 and 6; it then gradually decreased for septal deviation types 4, 1 and 7, becoming very limited for types 3 and 2. This trend remained unchanged over time (3-month to 6-month follow-up) (Fig. 4). In our series, 1 septal abscess (Mladina type 4) and 1 saddle nose deformity (Mladina type 2) were reported after endoscopic septoplasty. No haematomas, synechiae, or perforations were observed.
Discussion
Over the years, many surgical techniques for the correction of septal deformity have been described. The concept of septoplasty was first popularised separately by Killian (1904) 7 and Freer (1902) 8 more than 100 years ago. In 1947, Cottle defined surgical septoplasty as a treatment to correct nasal airway obstruction and standardised the technique 9 . This technique has remained largely unchanged up to now. Recently introduced endoscopic endonasal techniques provide better magnification and illumination of the surgical field and can also be used to assist septal surgery 11 . The application of endoscopic techniques for correction of septal deformities was initially described in 1991 by Stammberger. Since that time, surgeons have performed endoscopic septoplasties not only to treat symptomatic nasal obstruction, but also to improve surgical access to the middle meatus as an adjunct to endoscopic sinus surgery (ESS) [10][11][12][13][14][15][16] . Endoscopic septoplasty is now an attractive alternative to the traditional headlight approach for septoplasty. Bothra et al. showed better results and fewer complications with endoscopic septoplasty compared to conventional approaches, as endoscopy gave better illumination and improved access to high deviations and spurs 17 . The same opinion in favour of endoscopic septoplasty was expressed later by several authors who compared the two techniques 18 . Gulati et al. 19 found that an endoscopic approach to septoplasty simplifies identification of the pathology due to better illumination, improved accessibility to remote areas and magnification, while allowing for limited incision and elevation of flaps without compromising adequate exposure of the pathological site. Paradis et al. 20 compared endoscopic vs classic septoplasty. The authors recruited 63 patients with a septal deviation meeting strict inclusion/exclusion criteria and measured outcomes including surgical time, intraoperative complications and pre- and post-operative data from the Nasal Obstruction Symptom Evaluation (NOSE) questionnaire. There were subjective improvements in nasal obstructive symptoms in both groups, but without significant differences between endoscopic and classic septoplasty. However, objective outcome measures, including operative time and intraoperative complications, favoured the endoscopic technique. Therefore, considering these findings and the advantages of endoscopy (e.g., improved visualisation of the surgical field, increased precision and enhanced teaching opportunity), the use of an endoscopic approach for septoplasty is suggested over a traditional technique for correction of septal deviation. While the majority of authors seem to prefer the endoscopic technique, no one has analysed the effectiveness of this procedure in resolving the different septal deformities.
To our knowledge, this is the first study that evaluates the corrective capacity of this technique for each type of deviation by analysing pre- and post-operative objective outcomes, as well as subjective outcomes gathered from the validated NOSE questionnaire 21-23 .
Mladina et al. codified a classification for septal deformity based on direct observations of 2589 patients. The authors concluded that almost 90% of subjects showed 1 of the 7 types of septal deformity described 6 24 . We divided our cohort based on this simple and effective classification. By direct endoscopic visualisation and data processing of coronal CT scans, it was easily possible to stratify our sample into each of the seven types described by Mladina et al. 6 .
For those who deal with functional nasal surgery, evaluation of nasal airflow perception is the most difficult parameter to study. Nasal breathing is a complex function of the nose that may be influenced by various conditions such as humidity, nasal resistance, and contact of inspired air with the nasal surfaces. Stewart et al. in 2004 completed the validation of a disease-specific instrument to assess nasal obstruction: the NOSE scale 25 . In accordance with Kahveci et al., who found the NOSE scale to be a very efficient tool for evaluating outcomes of septoplasty, we adopted this tool to assess the effectiveness of endoscopic septoplasty in different types of deviations, comparing outcomes observed preoperatively and at 3 and 6 months post-operatively 21 . Generally, turbinate surgery has not been accepted as an exclusion criterion when functional outcomes of septoplasty are evaluated 22 25 26 . However, we preferred to include only patients with septal deviation without any other confounding factors (e.g., inferior turbinate hypertrophy) to evaluate the efficiency of septoplasty. Data analysis from the NOSE score showed a marked improvement in airflow perception in all patients treated. No significant differences were appreciated by comparing the NOSE scores at 3 months and 6 months after surgery. In accordance with Skitarelic et al., these findings show that endoscopic septoplasty is an effective procedure with stable results over time 19 .
What we consider very interesting is that the analysis of the NOSE score for individual septal deformities highlighted a different efficacy of the surgical procedure. In particular, the corrective power seems to be greater for deviation types 5 and 6, gradually decreasing in types 4, 1 and 7 and becoming minimal for types 3 and 2. As already shown by Gupta et al., endoscopic vision allows excellent lighting of the septum in its posterior portion (Cottle's areas IV, V) and facilitates correction of all deviations in this area. Because type 5 and 6 deviations are located mainly in the posterior areas, this could explain the increased corrective power obtained for these deviations in our sample 27 28 . Nayak et al. reported that about 10% of cases with anterior septal deformity had persistent septal deviation after endoscopic septoplasty. In the same way, we have found greater difficulty in performing the endoscopic procedure for deviations in this area (Cottle's areas I, II, III) 11 27 . In these areas, it is difficult to obtain a good endoscopic view owing to the lack of support for the endoscope. Moreover, the elastic recoil of the cartilage requires detaching a large portion of the septum and releasing it caudally. Furthermore, significant bleeding requiring frequent cleaning of the endoscope's tip renders the procedure in this area difficult. We believe that this may explain the reduced corrective capacity of endoscopic septoplasty for type 2, 3 and 7 deformities.
Conclusions
This study has shown that the corrective power of endoscopic septoplasty differs according to the type of deviation. Even if endoscopic septoplasty may be considered a reliable alternative to traditional techniques, it is essential to properly identify the type of deformity preoperatively in order to select the adequate surgical strategy. Long-term follow-up and larger series are necessary to more accurately assess the indications and limitations of endoscopic-assisted septoplasty in all types of deviation. | 2018-09-16T05:47:12.689Z | 2018-08-01T00:00:00.000 | {
"year": 2018,
"sha1": "13e3d49787ed07a5a9ee7248f4c71d1747fe6c32",
"oa_license": "CCBYNCND",
"oa_url": "https://www.actaitalica.it/article/download/75/77",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "13e3d49787ed07a5a9ee7248f4c71d1747fe6c32",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235566954 | pes2o/s2orc | v3-fos-license | Bayesian Approaches for Handling Hypothetical Estimands in Longitudinal Clinical Trials With Gaussian Outcomes
Abstract The International Council for Harmonisation (ICH) recently published an E9(R1) addendum that requires the estimand associated with the study objective in clinical trials to be clearly defined. One of the challenges in defining an estimand is the estimand's handling of intercurrent events (ICEs) that affect the collection or interpretation of the data for the study. Among the strategies for handling ICEs, sponsors may prefer to examine hypothetical strategies that assess the theoretical or attributable efficacy of test drugs or biologic products. We first look at several estimands under different hypothetical treatment conditions of interest. For these estimands, the data after ICEs are ignored and treated as missing. Analyses are carried out with missing data assumptions under missing at random, control-based imputation, and return-to-baseline imputation. With the explicit forms of these hypothetical estimands derived, we investigate Bayesian approaches to obtain corresponding point and interval estimates and propose a Bayesian sensitivity analysis which avoids the information-positive problem. The methods are illustrated with applications to three clinical trials.
Introduction
For longitudinal clinical trials with continuous outcomes, a common objective is to estimate treatment effects at the end of these trials while accounting for missing observations. Naïve deletion of patients whose data contain missing values can lead to biased inferences (Little and Rubin 2020). In 2010, a National Research Council panel (NRC 2010) made 18 recommendations that highlight the need to reduce missing data in trial design and to apply proper statistical methods in conducting statistical analysis. The panel also recommended that sponsors carefully define the estimands (the population parameters to be estimated) and their corresponding statistical analysis methods in the study protocol. In an effort to provide more clarity and guidance to address missing data, the recent International Council for Harmonisation (ICH) E9(R1) addendum titled "Estimands and Sensitivity Analysis in Clinical Trials" proposed a structured framework to define estimands and to align planning, design, conduct, analysis, and interpretation (ICH 2019). The addendum provides a description of five attributes that are used to construct an estimand: • The treatment condition of interest to which the comparison is made; for example, the comparison may include treatment interruption and/or addition of other therapy as part of the condition of interest. • The population of patients targeted by the scientific question. • The variable (or endpoint) to be obtained for each patient that is required to address the scientific question. • Specification of how to account for intercurrent events (ICEs) to reflect the scientific question of interest. • A population-level summary for the variable that provides the basis for a comparison between treatment conditions.
Considerations for choosing an estimand should depend on the study objectives. These considerations involve coping with missing data and specifying the strategy to address ICEs that occur during the trial and result in uncollected data or influence the interpretation of the data collected after the ICEs. ICH E9(R1) also distinguishes between ICEs and missing data. The handling of the treatment condition and of data following ICEs should be addressed in the estimand's definition (e.g., collection and use of data after treatment discontinuation or use of rescue medication). Missing data that occur when data should have been collected based on the definition of the estimand but could not be collected (e.g., an analysis conducted before all subjects have completed follow-up) are addressed in the analysis models (e.g., under a missing at random (MAR) assumption). ICH E9(R1) proposed five estimand definition strategies to handle ICEs, including treatment policy, hypothetical, composite, principal stratum, and while-on-treatment. The choice of a strategy depends on the stage of development and the interests of sponsors, regulators, patients, physicians, and payers (Keene et al. 2020). From a sponsor's perspective, the hypothetical strategy under several envisioned scenarios can provide biologically important estimates for the active treatment that are not confounded with other treatments taken after ICEs. Our research will focus on two common hypothetical estimands under the following conditions: (1) patients who experience ICEs continue the assigned treatment to the end of the study; and (2) patients who experience ICEs discontinue the active treatment and do not take any other treatment (such as rescue medication). Under the first condition, the treatment effect may be evaluated using a mixed model for repeated measures (MMRM) analysis under the MAR assumption (Mallinckrodt et al. 2008). For the second condition, a multiple imputation (MI) approach is often used to handle missing data after ICEs under different missing not at random (MNAR) assumptions, including control-based imputation (CBI) (Carpenter, Roger, and Kenward 2013; Liu and Pang 2017; Mehrotra, Liu, and Permutt 2017) and a return-to-baseline (RTB) approach (Zhang, Golm, and Liu 2020).
With the advancement of Bayesian software packages such as PROC MCMC in SAS (SAS 2017), Stan (STAN 2019), and WinBUGS (Lunn et al. 2000), developing Bayesian methods and MI approaches to address missing data is less complicated. Although there are multiple approaches to the hypothetical strategy, this article focuses on the CBI and RTB assumptions and on implementing those assumptions using a Bayesian framework. In addition, we compare this approach to likelihood-based methods and MI. The article proceeds as follows. Section 2 defines the hypothetical estimands explicitly and discusses conventional analysis methods. Section 3 describes Bayesian methods to obtain point and interval estimates. A Bayesian sensitivity analysis is proposed that avoids the information-positive problem as discussed in Cro, Carpenter, and Kenward (2019). Section 4 provides analyses of three clinical trials to illustrate the applications of these methods. Section 5 presents discussion and conclusions.
Hypothetical Estimands, Missing Data Assumptions, and Estimators
Consider a longitudinal trial with two treatment groups. Let Y_ijk be the outcome of interest for patient i receiving treatment j at time k, where i = 1, . . ., n; j = 0, 1; k = 1, . . ., T; and T is the end-of-study time point for the primary treatment comparison. Let X_i = (x_i1, . . ., x_iL) be a collection of L baseline covariates (e.g., the baseline measure Y_ij0 before the trial starts). We assume that the expectation of Y_ijk is

μ_ijk = α_jk + X_i β_k, j = 0, 1; k = 1, . . ., T,    (1)

where α_jk is the intercept at time k for treatment group j and β_k = (β_k1, . . ., β_kL) is a set of slope coefficients at time k.
In addition, we assume that the repeated measures Y_ij = (Y_ij1, . . ., Y_ijT) follow a multivariate normal distribution, Y_ij ∼ N(μ_ij, Σ), where N denotes the normal distribution and μ_ij = (μ_ij1, . . ., μ_ijT). We further assume that the X_i are centralized, so the intercept α_jk becomes the treatment effect at the mean level of the baseline covariates X (= 0). All the parameters γ = {α_jk, β_k, Σ} are defined for an ideal trial in which all patients complete the study according to the protocol. When there are patients who deviate from the protocol, we will use these parameters to define the hypothetical estimands.
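To make Model (1) concrete, the short Python sketch below simulates longitudinal outcomes with per-visit intercepts α_jk, per-visit baseline-covariate slopes β_k, and a covariance matrix Σ shared by both arms; all numerical values are hypothetical:

import numpy as np

rng = np.random.default_rng(0)
T, L, n = 4, 1, 50                          # visits, covariates, patients per arm

alpha = np.array([[0.0, -1.0, -2.0, -3.0],  # control intercepts alpha_0k
                  [0.0, -2.0, -4.0, -6.0]]) # active-drug intercepts alpha_1k
beta = np.full((T, L), 0.5)                 # slope beta_k for each visit
A = rng.normal(size=(T, T))
Sigma = A @ A.T + np.eye(T)                 # an arbitrary positive-definite covariance

def simulate_arm(j, n):
    X = rng.normal(size=(n, L))             # centralized baseline covariates
    mu = alpha[j] + X @ beta.T              # mu_ijk = alpha_jk + X_i beta_k
    return mu + rng.multivariate_normal(np.zeros(T), Sigma, size=n)

Y0, Y1 = simulate_arm(0, n), simulate_arm(1, n)
print(Y1.mean(axis=0) - Y0.mean(axis=0))    # crude per-visit treatment effects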
In longitudinal clinical trials, we encounter a mixture of two types of missing data patterns: intermittent and monotone missingness. Intermittent missingness refers to missing data that occur between observed data while patients stay in the study with their assigned therapy. Monotone missingness occurs when data are observed at all time points until a patient drops out of the study and all follow-up observations for that patient are missing. Common causes of intermittent missing data include missed visits and data collection or processing errors. Frequently, the probability of missing values is likely to be independent of the missing data, making the MAR assumption plausible. Therefore, the intermittent missing values could be handled in analysis models that rely only on observed data. Under the MAR assumption, for example, the intermittent missing data can be imputed first to produce datasets with monotone missingness (see chap. 4 in O'Kelly and Ratitch 2014). The monotone missing data mechanism can be more challenging because patients may discontinue the assigned treatment (an ICE) and their unobserved outcomes may differ from the ones that would have been observed if they stayed on the assigned treatment. Here, we examine hypothetical estimands associated with the monotone missingness pattern.
Hypothetical Estimands
We consider two hypothetical estimands under different envisaged treatment conditions and outcome distributions following ICEs. We are interested in these two hypothetical estimands because they evaluate the effects of the test treatment without confounding stemming from rescue medications that may be given after ICEs in practical clinical trials. For both estimands, the outcomes after ICEs, even collected, are ignored and treated as missing data.
Hypothetical Theoretical Estimand
Consider a hypothetical estimand that measures the theoretical or pure drug effect assuming all patients complete the trial with their assigned treatment. We refer to this hypothetical estimand as the theoretical estimand, which corresponds to the de jure estimand described in White, Joseph, and Best (2020). For this estimand, the data after ICEs are ignored and treated as missing in the analysis models. For estimation of this estimand, one might assume the post-ICE data are MAR. For a patient i who has data observed up to time p (1 ≤ p < T) and drops out after that, the mean response before and after time p is modeled as μ_ijk = α_jk + X_i β_k, j = 0, 1; k = 1, · · ·, T (see the first row in Table 1).
Note that the MAR assumption here is just one potential option. In general, the missing data mechanism cannot be verified from the observed data. For example, the MAR assumption may be violated if the ICE is a discontinuation of assigned treatment because of adverse events which influence the outcomes.
Hypothetical Attributable Estimands
The hypothetical theoretical estimand addresses a "pure" treatment effect under a hypothetical scenario in which patients who drop out of the study continue with their assigned treatment. This scenario is unrealistic in many practical situations, because patients who drop out would not continue taking the assigned therapy. A more realistic attributable treatment effect is the effect for patients who drop out under a "what if" condition where the patients discontinue their assigned therapy and do not start any other therapies. These estimands correspond to the attributable de facto estimands described in White, Joseph, and Best (2020). In placebo-controlled trials, it is reasonable to assume that patients who drop out of the placebo group would have outcomes similar to those of patients who stayed in the study, because there is no biological difference between taking and not taking a placebo. Thus, the MAR assumption can be applied to patients who drop out of the control group. Carpenter, Roger, and Kenward (2013) proposed three approaches to handle the missing data in the active drug group, based on different CBI assumptions. The three approaches are (i) copy reference (CR), which replaces the expectation profile of patients receiving the active drug who drop out with the expectation profile of the control group at all time points, including times before the ICEs; (ii) jump-to-reference (J2R), which replaces the expectation profile after a patient drops out with that of the control group only for post-dropout times; and (iii) copy increments in reference (CIR), where the increment in mean change after dropout is the same as the increment in mean change of the control group. Using the notation of Model (1), the missing data assumptions for CR, J2R, and CIR on the mean profile μ_i1k at each time point k, for a patient in the active drug group who drops out after time p (i.e., with observed data up to and including time p), are defined in Table 1. The RTB is a different possible assumption for handling missing data. RTB assumes that the mean response after a patient drops out will be the same as their mean at baseline (Zhang, Golm, and Liu 2020). The key assumption of RTB is that all treatment effects (in both the active drug and control groups) that occur before discontinuation would disappear by the primary analysis time point. For example, this may be reasonable in therapeutic areas where standard-of-care medication is used as control and discontinued patients may start rescue medication, so the assumed attributable effects without taking any rescue medication may return to baseline after discontinuation for both groups. In situations where Y_ijk represents the change from baseline, its expected value is equal to 0 after dropout. Table 1 summarizes the assumptions about treatment condition and the mean profiles for missing data for these hypothetical estimands.
MI-Based Methods
MI techniques provide an intuitive approach to coping with missing data by explicitly specifying imputation models under various assumptions. Their ability to allow separate imputation and analysis models offers flexibility that is highly valued in defining and handling ICEs for hypothetical estimands. Based on Model (1), an MCMC-based imputation may apply a data augmentation sampling procedure iteratively and draw samples of (i) the missing data conditional on the parameters and (ii) the model's parameters conditional on the complete data (Tanner and Wong 1987). Upon convergence of the MCMC algorithm, this process produces the posterior samples for the parameters {α, β, Σ} in Model (1) under MAR. The missing data are imputed multiple times to obtain complete datasets. The conventional MI-based estimators can then be obtained by combining the analysis results from those imputed complete datasets using Rubin's rule (Rubin 1987).
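As a concrete illustration of the combination step, the sketch below implements Rubin's rules for a scalar estimand; the per-imputation estimates and variances are hypothetical:

import numpy as np

def rubin_combine(estimates, variances):
    # Combine point estimates and within-imputation variances from m
    # imputed datasets using Rubin's rules.
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    m = len(estimates)
    qbar = estimates.mean()                # pooled point estimate
    w = variances.mean()                   # within-imputation variance
    b = estimates.var(ddof=1)              # between-imputation variance
    return qbar, w + (1.0 + 1.0 / m) * b   # pooled estimate, total variance

est, var = rubin_combine([-2.1, -1.8, -2.4, -2.0, -2.2],
                         [0.40, 0.42, 0.39, 0.41, 0.40])
print(est, var ** 0.5)                     # pooled difference and its SE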
Under the multivariate normal model, the response vector Y_ij can be partitioned into the observed part, Y^o_ij, and the missing part, Y^m_ij, with the mean and covariance partitioned correspondingly as μ_ij = (μ^o_ij, μ^m_ij) and Σ = [Σ_oo, Σ_om; Σ_mo, Σ_mm]. For the hypothetical attributable estimands under the CBI or RTB missing data approaches, the imputation of the missing data can be obtained using posterior samples of the parameters {α, β, Σ} under MAR, and then imputing the missing data using the mean profiles of CR, J2R, CIR, or RTB as specified in Table 1. The MI-based estimators can be obtained by combining the results from each imputed complete dataset using Rubin's rule. For the MI-based estimators under the MAR, CR, J2R, and CIR assumptions, SAS macros are available from the DIA missing data working group (https://www.lshtm.ac.uk/research/centresprojects-groups/missing-data).
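The imputation step amounts to drawing from the conditional multivariate normal distribution of the missing components given the observed ones, with the marginal mean profile replaced by the assumed CBI profile. A minimal sketch with hypothetical numbers follows (the mean vector shown mimics a J2R-style profile):

import numpy as np

rng = np.random.default_rng(1)

def impute_conditional(y_obs, mu, Sigma, obs_idx, mis_idx):
    # Draw the missing part of a multivariate normal vector given its
    # observed part, for an assumed full mean profile mu.
    S_oo = Sigma[np.ix_(obs_idx, obs_idx)]
    S_mo = Sigma[np.ix_(mis_idx, obs_idx)]
    S_mm = Sigma[np.ix_(mis_idx, mis_idx)]
    K = S_mo @ np.linalg.inv(S_oo)
    cond_mean = mu[mis_idx] + K @ (y_obs - mu[obs_idx])
    cond_cov = S_mm - K @ S_mo.T
    return rng.multivariate_normal(cond_mean, cond_cov)

# Four visits; the patient is observed at visits 0-1 and missing at 2-3.
Sigma = np.eye(4) + 0.5 * (np.ones((4, 4)) - np.eye(4))
mu_j2r = np.array([-2.0, -4.0, -2.5, -3.0])  # control-like means after dropout
print(impute_conditional(np.array([-1.5, -3.8]), mu_j2r, Sigma, [0, 1], [2, 3]))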
The conventional MI approach with common combination rules, however, can produce biased variance estimates when the imputation and analysis models are uncongenial (Meng 1994). The MI is congenial under the MAR assumption; therefore, the conventional MI analysis for the hypothetical theoretical estimand under MAR produces appropriate sampling error and interval estimates with nominal coverage. The MI is not congenial for the CBI and RTB methods. The conventional MI can overestimate the variances compared to the variability which is expected if the trial and the analysis were performed again. This overestimation may lead to overly conservative analyses (Ayele et al. 2014;Zhang, Golm, and Liu 2020). One approach for handling the variance estimation for such estimators where uncongeniality occurs is to use bootstrapping (Bartlett and Hughes 2020). Alternative approaches based on maximum likelihood and delta methods were also previously proposed (Lu 2014;Tang 2015;Liu and Pang 2016).
Likelihood-Based Method
In a likelihood-based approach, the hypothetical parameters defined in Model (1) are first estimated by a likelihood-based method, and then the treatment effects for the hypothetical attributable estimand under different missing data assumptions are derived from the parameters of Model (1). The treatment effect for the MAR-based analysis is estimated from the parameters of Model (1) as θ^MAR = α_1T − α_0T. For CBI-based analyses, let f_jp be the proportion of patients in group j who drop out at time p, and let f_jT = 1 − sum_{p=1}^{T−1} f_jp be the proportion of patients in group j who complete the trial. The average effects at time point T for the active treatment group (evaluated at X = 0), under the different CBI assumptions, are derived by Liu and Pang (2016) as follows:

α̃^CR_1T = sum_{p=1}^{T−1} f_1p { α_0T + [Σ_mo Σ_oo^{−1} (α^o_1p − α^o_0p)]_T } + f_1T α_1T,
α̃^J2R_1T = sum_{p=1}^{T−1} f_1p α_0T + f_1T α_1T,    (2)
α̃^CIR_1T = sum_{p=1}^{T−1} f_1p (α_1p + α_0T − α_0p) + f_1T α_1T,

where α_1k and α_0k are treatment effects at time k for the active drug and control groups, respectively; α^o_1p and α^o_0p are the subvectors of the first p elements of α_1 = {α_1k} and α_0 = {α_0k}; and [·]_T is the element of the subvector corresponding to time point T. The treatment differences are

θ^CBI = α̃^CBI_1T − α_0T, for CBI ∈ {CR, J2R, CIR}.    (3)

Note that the dimensions of the covariance submatrices Σ_mo and Σ_oo vary over the missing data pattern (depending on the length of the observed vector) for p = 1, . . ., T − 1 in Equations (2) and (3).
Similarly, we can construct estimands under the RTB assumption.
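The sketch below computes the plug-in mean at the final visit for the active arm under J2R, CIR, and CR, following the forms of Equations (2) and (3) above; the parameter values and dropout proportions are hypothetical:

import numpy as np

def cbi_mean_at_T(alpha1, alpha0, Sigma, f1, method="J2R"):
    # Plug-in active-arm mean at the final visit T under a CBI assumption.
    # f1[p-1] is the proportion dropping out after visit p; f1[-1] completes.
    T = len(alpha1)
    total = f1[-1] * alpha1[-1]
    for p in range(1, T):
        if method == "J2R":
            m = alpha0[-1]
        elif method == "CIR":
            m = alpha1[p - 1] + alpha0[-1] - alpha0[p - 1]
        elif method == "CR":
            obs, mis = list(range(p)), list(range(p, T))
            K = Sigma[np.ix_(mis, obs)] @ np.linalg.inv(Sigma[np.ix_(obs, obs)])
            m = alpha0[-1] + (K @ (alpha1[:p] - alpha0[:p]))[-1]
        total += f1[p - 1] * m
    return total

alpha1 = np.array([-2.0, -4.0, -6.0])
alpha0 = np.array([-1.0, -2.0, -3.0])
Sigma = np.eye(3) + 0.5 * (np.ones((3, 3)) - np.eye(3))
f1 = np.array([0.10, 0.10, 0.80])            # 20% dropout, 80% completers
for m in ("J2R", "CIR", "CR"):               # treatment differences vs alpha_0T
    print(m, cbi_mean_at_T(alpha1, alpha0, Sigma, f1, m) - alpha0[-1])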
The point estimates for the MAR, CBI, and RTB estimators can be calculated from the MMRM estimates {α̂_jk, j = 0, 1; k = 1, . . ., T} and Σ̂, and from the observed proportions of dropouts over time {f̂_jk, j = 0, 1; k = 1, . . ., T}. The corresponding sampling variance for an estimate θ̂ under CBI or RTB can be obtained using the variance formula

var(θ̂) = E[var(θ̂ | f̂)] + var(E[θ̂ | f̂]),    (4)

where f̂ = (f̂_01, . . ., f̂_0T, f̂_11, . . ., f̂_1T) is the vector of observed proportions of dropouts over time for both treatment groups. The conditional variance in the first term, var(θ̂ | f̂), can be computed using variance estimates from the MMRM model (e.g., using the estimated covariance obtained from the LSMEANS statement in the SAS PROC MIXED analysis output). The second term can be calculated using the point estimates of θ̂ and var(f̂) = {v_jkl}, where v_jkl = f̂_jk(1 − f̂_jk)/n_j for k = l and v_jkl = −f̂_jk f̂_jl/n_j for k ≠ l, and n_j is the sample size of group j.
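For the J2R special case, where the treatment difference reduces to θ^J2R = f_1T (α_1T − α_0T), Equation (4) takes a particularly simple form; the sketch below evaluates it with hypothetical inputs, ignoring any covariance between the estimated difference and the estimated completer proportion:

def j2r_variance(delta, var_delta, f1T, n1):
    # Sampling variance of theta = f1T * delta by the law of total variance:
    # E[var(theta | f)] + var(E[theta | f]).
    term1 = f1T ** 2 * var_delta               # conditional-variance term
    term2 = delta ** 2 * f1T * (1 - f1T) / n1  # binomial variation in f1T
    return term1 + term2

print(j2r_variance(delta=-2.4, var_delta=1.2, f1T=0.8, n1=84) ** 0.5)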
Bayesian Methods for MAR, CBI, and RTB Estimators
The Bayesian approach treats unobserved values as parameters and provides a natural path to estimating the model parameters while accounting for the uncertainty that arises from the missing values. A major difficulty in using Bayesian methods is their computational complexity. This difficulty has been reduced by the advancement of efficient Markov chain Monte Carlo (MCMC) techniques. The abundance of available Bayesian-focused software, such as WinBUGS, PROC MCMC in SAS, and Stan, reduces many implementation difficulties in sampling from posterior distributions under the MAR or MNAR assumption. The Bayesian paradigm enables myriad ways of combining the missing data imputation with sampling from the posterior distribution of the parameters under the various hypothetical assumptions.
Using the notation of Section 2, we partition the response vector Y_ij into its observed part, Y^o_ij, and its missing part, Y^m_ij, with the mean and covariance partitioned accordingly as μ_ij = (μ^o_ij, μ^m_ij) and Σ = [Σ_oo, Σ_om; Σ_mo, Σ_mm], for the MMRM analysis under the MAR assumption. For Bayesian inference on Model (1), we assigned independent conjugate and diffuse prior distributions. Specifically, we used diffuse normal priors for α = {α_jk} and β = {β_k}, and assigned an inverse Wishart prior distribution with T degrees of freedom and an identity inverse covariance matrix for Σ.
Formally, p(α, β, Σ) = p(α) p(β) p(Σ), with α_jk ∼ N(0, 10^6), β_km ∼ N(0, 10^6), and Σ ∼ Inv-Wishart(T, I). Sampling from the joint posterior distribution of the parameters and the missing values can be accomplished by iterating through the following data augmentation steps (Tanner and Wong 1987):

Step 1. Draw the missing data Y^m_ij from their conditional distribution given the observed data and the current parameters, p(Y^m_ij | Y^o_ij, α, β, Σ).
Step 2. Draw the parameters (α, β, Σ) from their posterior distribution given the completed data, p(α, β, Σ | Y^o, Y^m).

At each iteration, we can compute the parameter of interest θ^MAR = α_1T − α_0T to obtain its posterior distribution. Note that the sampling in Step 1 requires the MAR assumption, such that the conditional distribution for the missing data depends only on the observed data and model parameters.
For the CBI and RTB estimators, an additional step is added at each iteration: calculate θ^CBI (or θ^RTB) from the current draws of {α_1, α_0, Σ} and the dropout proportions, using the expressions in Equations (2) and (3).
These procedures assume that the model parameters {α_1, α_0, Σ} and the proportions of patients who drop out at each time point are independent. This is reasonable because the missing data mechanism is ignorable under the MAR assumption and because {α_1, α_0, Σ} and {f_11, . . ., f_1T} or {f_0T, f_1T} are a priori independent (Rubin 1976). Sampling of {f_11, . . ., f_1T} or {f_0T, f_1T} can be done in the same MCMC process as for the MMRM, or in a separate step using matrix manipulation. For example, the matrix call functions in SAS PROC MCMC can be used to implement the computation by using the posterior samples of {α_1, α_0, Σ} obtained from the MMRM. A SAS macro and Stan code for implementing CR, J2R, and CIR are provided in the supplemental material online.
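The separate-step computation can be sketched as follows: dropout-pattern proportions are drawn from a Dirichlet posterior and combined with posterior draws of the final-visit means (here simulated from hypothetical normal approximations rather than an actual MMRM fit) to produce the posterior of the J2R treatment difference:

import numpy as np

rng = np.random.default_rng(2)
ndraw = 4000

# Stand-ins for posterior draws of the final-visit means from the MMRM step.
a1T = rng.normal(-6.0, 0.6, ndraw)            # active-arm mean at T
a0T = rng.normal(-3.0, 0.6, ndraw)            # control mean at T

# Dropout-pattern counts in the active arm (after visit 1, after visit 2, completers).
counts = np.array([8, 9, 66])
f_draws = rng.dirichlet(counts + 1.0, ndraw)  # flat Dirichlet prior on patterns
f1T = f_draws[:, -1]                          # proportion of completers

theta = f1T * (a1T - a0T)                     # posterior of the J2R difference
print(theta.mean(), np.percentile(theta, [2.5, 97.5]))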
Bayesian Sensitivity Analysis
In the CBI estimators described earlier, the expected profile after dropout for a patient in the active treatment group is defined using the parameters of the control group. Therefore, some of the terms cancel out when the treatment difference is calculated, as shown in Equations (2) and (3). The treatment effects are estimated using a plug-in approach based on estimates from the MMRM. The sampling variance for these plug-in estimators is obtained from Equation (4) or by using Bayesian posterior samples as described in Section 3.1. Research and simulations have shown that these sampling variance estimates are less biased than the variances from MI that are calculated using common combination rules (Ayele et al. 2014;Lu 2014;Tang 2015;Liu and Pang 2016).
Under the CBI assumption, the estimates and their sampling variance from this plug-in approach or from Bayesian methods are more efficient than those from MI combining rules. However, the statistical literature cautions that sampling variances of these plug-in estimators can decrease as the proportion of dropout in the active treatment group increases. Cro, Carpenter, and Kenward (2019) proposed a concept of informationanchored sensitivity analysis and demonstrated that the plug-in J2R analysis was not information-anchored sensitivity analysis for the MAR estimand. They showed that the J2R analysis based on MI procedure was information-anchored sensitivity analysis and linked the J2R analysis to a δ-adjusted imputation method. Their approach assumes that the true expected parameters in the active drug group after dropout are "different" from the expected parameters in the control group, and shows that the analysis is similar to using a δ-adjustment for the imputed values under MAR.
Following Cro, Carpenter, and Kenward's approach, we propose a Bayesian sensitivity approach for the CBI estimators. The general idea is to use a prior distribution for the assumed expected parameter for dropouts in the active drug group instead of assuming that this expected parameter equals that of the control group. For simplicity, we describe the method for J2R first. We assume that α^m_1T ∼ N(α_0T, τ^2) instead of α^m_1T = α_0T as in Equation (2). Then the Bayesian J2R estimator becomes

θ^J2R_B = f_1T (α_1T − α_0T) + (1 − f_1T)(a^m_1T − a_0T).

Compared to the plug-in J2R estimator, this Bayesian J2R estimator has an extra term that accounts for the potential difference between the expected mean for patients who drop out and the mean of the control group. The estimator can be obtained from the Bayesian MCMC samples as described earlier, with an additional draw from the prior distribution a^m_1T − a_0T ∼ N(0, τ^2). When τ = 0, this analysis is equivalent to the plug-in J2R analysis, which is considered the primary analysis for the hypothetical attributable estimand under J2R. Examining different values of τ > 0 yields a series of sensitivity analyses for this primary estimator. The expectation of θ^J2R_B is the same as the expectation of θ^J2R, but its variance increases with the additional variation from the prior distribution. This sensitivity analysis avoids the information-positive problem discussed in Cro, Carpenter, and Kenward (2019). When the plug-in J2R analysis (i.e., τ = 0) is significant, a tipping point can also be found by increasing τ until the sensitivity analysis becomes non-significant.
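A minimal sketch of the tipping-point scan follows; it reuses hypothetical posterior draws, and each value of τ adds an independent prior draw to the plug-in J2R posterior as described above:

import numpy as np

rng = np.random.default_rng(3)
ndraw = 4000
delta = rng.normal(-3.0, 0.8, ndraw)        # stand-in posterior draws of a1T - a0T
f1T = 0.80                                  # completer proportion, active arm

for tau in (0.0, 1.0, 2.0, 3.0, 4.0):
    extra = rng.normal(0.0, tau, ndraw)     # prior draw for a_m1T - a0T
    theta = f1T * delta + (1 - f1T) * extra # Bayesian J2R estimator
    lo, hi = np.percentile(theta, [2.5, 97.5])
    print(f"tau={tau}: mean={theta.mean():.2f}, 95% CrI=({lo:.2f}, {hi:.2f})")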
Because the focus of the treatment comparison is the last time point, we take a simplified approach to the sensitivity analysis for CR and CIR, in which α^m_1j = α_0j for j = 2, . . ., T − 1, and we consider the prior distribution α^m_1T ∼ N(α_0T, τ^2) only for the expected parameter at the last time point, T. It can be shown that the Bayesian estimators for CR and CIR are θ^CR_B = θ^CR + (1 − f_1T)(a^m_1T − a_0T) and θ^CIR_B = θ^CIR + (1 − f_1T)(a^m_1T − a_0T), respectively, where a^m_1T − a_0T ∼ N(0, τ^2) with a given variance τ^2.
A few special values of τ may be of interest. When τ = 0, the Bayesian analysis corresponds to the plug-in CBI analysis. Another reference value for τ is √(V̂(α̂_1T)), the estimated standard error (SE) for the expected response from the MMRM analysis. This assumes that the mean profile for patients who drop out of the active treatment varies around the expected control-group mean with a variation equal to the estimated variation of the mean from the MMRM analysis. Because the prior distribution is independent of the data, the variance of the estimated treatment difference is V(θ^CBI_B) = V(θ^CBI) + (1 − f_1T)^2 τ^2. Thus, we consider a third reference value for τ such that V(θ^CBI_B) equals the variance of the MAR-based estimator, V(θ^MAR). Solving this equation, we get

τ = √( V(θ^MAR) − V(θ^CBI) ) / (1 − f_1T).    (5)

We can estimate this τ by using the variance estimates under MAR and CBI and the observed proportion f̂_1T. With this value of τ, the variance for the treatment difference from the Bayesian sensitivity analysis would be similar to the one from the CBI analysis based on MI with common combining rules, and it has the same interpretation as a δ-adjusted tipping point analysis (Liu and Pang 2017). We illustrate these methods in three case studies in Section 4.
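The short sketch below solves Equation (5) for τ from hypothetical variance estimates; the max(0, ·) guard handles the case where the CBI variance already exceeds the MAR variance:

import math

def tau_from_variances(var_mar, var_cbi, f1T):
    # Solve var_cbi + (1 - f1T)^2 * tau^2 = var_mar for tau.
    gap = max(0.0, var_mar - var_cbi)
    return math.sqrt(gap) / (1.0 - f1T)

print(round(tau_from_variances(var_mar=1.30, var_cbi=0.95, f1T=0.80), 2))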
Antidepressant Trial Data (DIA Missing Data Working Group)
This publicly available dataset from the DIA missing data working group (https://www.lshtm.ac.uk/research/centres-projectsgroups/missing-data) is based on a longitudinal study of an antidepressant drug. The study randomized 171 patients (one patient with intermittent missing values was removed) to an active test drug (n = 83) and a placebo (n = 88). The primary efficacy is assessed using the Hamilton Depression 17-item total score (HAMD-17) in terms of change from baseline at week 6. The HAMD-17 was collected at baseline and weeks 1, 2, 4, and 6. Overall, approximately 24% of patients in the active drug group and 26% of patients in the placebo group dropped out before week 6. We consider an MMRM model to define the hypothetical parameters of the expected change from baseline over time for each treatment group, adjusting for the baseline values at each time point. The results from MMRM and CBI using the Bayesian and MI approaches for the treatment difference are presented in Table 2. As expected, the MMRM results from the likelihood-based method are very similar to those from the Bayesian analysis with non-informative priors. For the CBI estimands, the variance estimates from the Bayesian approach are smaller than those obtained from MI with common combination rules. Sensitivity analyses were conducted with a few choices of τ values, as discussed in Section 3.2. The estimated SE for the mean HAMD-17 change from baseline at the last time point for the active drug group in the MMRM is approximately 0.8. The solved values of τ from Equation (5) were 2.19, 2.97, and 2.05 for CR, J2R, and CIR, respectively. The corresponding Bayesian sensitivity analysis results are shown in the last section of Table 2. The point estimates remain similar to those in the other CBI analyses, but the SEs are close to those obtained for the MI analysis. For J2R, the Bayesian sensitivity analysis had a slightly smaller variance than that from MI, which made the upper bound of the 95% CI slightly less than 0, as compared to a positive upper bound from the MI analysis. Figure 1 shows the mean and credible intervals with values of τ from 0 to 4 to explore the sensitivity of the CBI assumptions. The results become non-significant when τ is between 3.0 and 3.6 for J2R, CR, and CIR. These tipping points are higher than the reference value of 0.8 and those calculated from Equation (5), implying that the results are robust against additional variation in the CBI assumptions.
Schizophrenia Trial Data
This dataset was created from a multicenter, randomized, double-blind clinical trial involving patients diagnosed with schizophrenia, and the missing values follow a monotone missing data pattern. For simplicity, only data from the active treatment and placebo groups were used, because the test drug showed no efficacy. The study randomized 44 and 76 patients to the active treatment and placebo groups, respectively. The primary efficacy is assessed with the Positive and Negative Syndrome Scale (PANSS) total score, which was measured at baseline, on day 4, and in weeks 1, 2, 3, and 4 after randomization. Overall, approximately 18% of the patients in the active treatment group and 25% of those in the placebo group dropped out before week 4. We consider the conventional MMRM model to define the hypothetical parameters of the mean change from baseline over time for each group and the slopes for baseline at each time point. The results from MMRM and CBI using the Bayesian and MI approaches for the treatment difference are presented in Table 3. For the CBI estimands, the variance estimates from the Bayesian approach are smaller than those obtained from regular MI.
Using τ as calculated in Equation (5), we present the results for the sensitivity analysis for CBI in the last section of Table 3. Because no results for the CBI analyses are significant, no graph is presented for the sensitivity analysis in this example.
A Subset from an Antidepressant Study
In this example, we took a random subset of 200 patients (100 each in the active drug group and the placebo group) from another antidepressant study. The primary efficacy is assessed using the Montgomery–Åsberg Depression Rating Scale (MADRS) in terms of the change from baseline at the last time point. The MADRS was collected at seven post-baseline time points. Overall, approximately 15% of the patients in the active drug group and 19% of those in the placebo group dropped out before the end of the study. We consider the MMRM model to define the hypothetical parameters of the mean change from baseline over time for each group and the slopes for baseline at each time point. The results from MMRM and CBI using the Bayesian and MI approaches for the treatment difference are presented in Table 4. Similarly, the CBI analyses based on the Bayesian approach have smaller variance estimates than those using regular MI. The estimated SE for the mean MADRS change from baseline at the last time point for the active drug group in the MMRM is approximately 0.65. The solved values of τ from Equation (5) were 1.52, 1.95, and 1.38 for CR, J2R, and CIR, respectively. The corresponding Bayesian sensitivity analysis results are shown in the last section of Table 4. The point estimates remain similar to those in the other CBI analyses, but the SEs are close to those from MI with common combination rules. The results remain significant for CR, J2R, and CIR with the τ values calculated from Equation (5). Figure 2 shows the mean and CI for values of τ ranging from 0 to 13. The results become non-significant when τ is between 10 and 12.5 for J2R, CR, and CIR. These values are higher than the reference value of 0.65 and those calculated from Equation (5), implying that the analysis results are robust against additional variation in the CBI assumptions.
Discussion
ICH E9(R1) provides a framework to clarify trial objectives and estimands for handling ICEs and missing data in longitudinal clinical trials. In this article, we focused on two estimands under the hypothetical strategy proposed in ICH E9(R1). These hypothetical estimands may help trial sponsors understand the effects of an active treatment when there is no confounding from rescue medications. Each hypothetical estimand corresponds to a population parameter of interest under different assumptions about the values for patients who drop out before the primary analysis time point. The first estimand corresponds to a pharmacologic effect of the active drug under the hypothetical condition that all patients in the study continue the treatment up to the primary analysis time point. For estimation of this estimand, one might assume the post-ICE data are MAR. The second estimand evaluates the attributable treatment effect of the active treatment, assuming either that patients who drop out of the study would continue in the trial with the control treatment and have outcomes similar to those of patients in the control group (for CBIs), or that patients who drop out would not take any medication (e.g., no alternative therapy is available) and the treatment effects prior to discontinuation would be eliminated, so that the patients would return to their baseline status (for RTB). This estimand aims to assess the effect of the test drug without confounding from other medication. Data after ICEs are ignored and assumed to be missing. This estimand is different from the treatment policy estimand, where the data after ICEs may be collected and used in the analysis. For the treatment policy estimand, partial data after ICEs may be used in the analysis, which will lead to a larger SE as compared to the primary estimator of J2R. This is because of the assumption that the "true" mean profile after patients drop out in the active treatment group is equal to the mean profile of the placebo group for J2R. This assumption is strong but possibly conservative. It should be noted that all the assumptions for the missing data (i.e., MAR, CBI, and RTB) are untestable with observed data.
We describe Bayesian methods to implement the analysis for these estimands and the missing data handling using available software, such as SAS PROC MCMC and Stan. The methods are applied to three case studies. The results show that the variance estimates from the Bayesian approach are smaller than those obtained from conventional MI with the common combination rules. Another advantage of the Bayesian approach is the ability to conduct sensitivity analyses for the CBI analysis. The introduction of a prior distribution for the mean parameter for patients who drop out relaxes the assumption that these mean parameters for patients on active treatment after dropout are equal to those of control-group patients. The sensitivity analysis can be carried out by increasing the variance of the prior distribution, so that the mean for dropouts can vary around the mean parameter. A few reference values may be considered for the additional variability parameter τ in the prior, for example, the square root of the estimated variance from the MMRM analysis, or a value such that this Bayesian sensitivity analysis produces a variance estimate for the treatment effect that is similar to the variance obtained from MI with the common combination rules. This Bayesian sensitivity analysis also ensures that the variance of the treatment difference increases as τ increases, avoiding the information-positive problem discussed in Cro, Carpenter, and Kenward (2019).
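For reference, the "common combination rules" are Rubin's rules; a minimal sketch of how they pool M per-imputation estimates is given below (the input numbers are invented).

```python
# Rubin's rules for combining M multiply-imputed estimates.
import numpy as np

def rubin_combine(estimates, variances):
    """estimates, variances: length-M point estimates and squared SEs,
    one per imputed data set (between-imputation variance assumed > 0)."""
    q = np.asarray(estimates, dtype=float)
    u = np.asarray(variances, dtype=float)
    m = len(q)
    qbar = q.mean()                  # combined point estimate
    w = u.mean()                     # within-imputation variance
    b = q.var(ddof=1)                # between-imputation variance
    t = w + (1 + 1 / m) * b          # total variance
    df = (m - 1) * (1 + w / ((1 + 1 / m) * b)) ** 2  # Rubin's degrees of freedom
    return qbar, np.sqrt(t), df

print(rubin_combine([-2.4, -2.6, -2.5], [0.36, 0.40, 0.38]))
```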
A possible limitation of the Bayesian analysis is the need to define prior distributions for the model parameters. These distributions could have a significant influence when the number of units in the trial is small. However, this limitation exists for any "Bayesianly proper" MI procedure. We have illustrated the Bayesian method as primary and sensitivity analyses for the CBI-related hypothetical attributable estimand. A similar approach can be applied to the RTB-related estimand and to other methods, such as carrying the last expected value forward (Carpenter, Roger, and Kenward 2013). In addition, a δ-adjustment can be considered for the mean of the prior distribution, i.e., replacing a0T with δ + a0T.
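A minimal sketch of the δ-adjusted prior draw might look as follows; a0T stands for the control-group mean parameter at the primary time point, and all names and values are illustrative.

```python
# Delta-adjusted prior for the dropout mean (illustrative sketch).
import numpy as np

rng = np.random.default_rng(2)

def draw_dropout_mean(a0T_draws, delta=0.0, tau=1.0):
    """One draw of the dropout mean per draw of a0T: centred at delta + a0T
    with SD tau (the sensitivity parameter)."""
    return delta + a0T_draws + rng.normal(0.0, tau, size=np.shape(a0T_draws))

a0T = rng.normal(-10.0, 0.6, size=1000)   # hypothetical draws of the control mean
mu_dropout = draw_dropout_mean(a0T, delta=0.5, tau=1.5)
```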
This δ-adjustment allows the assumed mean for patients who drop out to be worse or better than the mean of the control group, depending on the conditions of the trial. In conclusion, the proposed Bayesian methods are flexible tools for handling missing data in longitudinal clinical trials. In this article, we have limited our discussion to continuous endpoints; further research is needed for other types of endpoints, such as binary, categorical, and time-to-event. | 2021-06-22T17:54:52.192Z | 2021-05-04T00:00:00.000 | {
"year": 2022,
"sha1": "448b0bce50c76771edcd3aa88afcae02cc91d1dd",
"oa_license": "CCBY",
"oa_url": "https://figshare.com/articles/journal_contribution/Bayesian_Approaches_for_Handling_Hypothetical_Estimands_in_Longitudinal_Clinical_Trials_with_Gaussian_Outcomes/14535920/2/files/27872123.pdf",
"oa_status": "GREEN",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "5bd1927932ef91b5a7f0c9ebfdd6f4684dc3f3f7",
"s2fieldsofstudy": [
"Medicine",
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
16215709 | pes2o/s2orc | v3-fos-license | Is grazing exclusion effective in restoring vegetation in degraded alpine grasslands in Tibet, China?
Overgrazing is considered one of the key disturbance factors that results in alpine grassland degradation in Tibet. Grazing exclusion by fencing has been widely used as an approach to restoring degraded grasslands in Tibet since 2004. Is the grazing exclusion management strategy effective for the vegetation restoration of degraded alpine grasslands? Three alpine grassland types were selected in Tibet to investigate the effect of grazing exclusion on plant community structure and biomass. Our results showed that species biodiversity indicators, including the Pielou evenness index, the Shannon–Wiener diversity index, and the Simpson dominance index, did not change significantly under grazing exclusion. In contrast, the total vegetation cover, the mean community vegetation height, and the aboveground biomass were significantly higher in the grazing exclusion grasslands than in the freely grazed grasslands. These results indicate that grazing exclusion is an effective measure for maintaining community stability and improving aboveground vegetation growth in alpine grasslands. However, the statistical analysis showed that growing season precipitation (GSP) plays a more important role than grazing exclusion in influencing vegetation in alpine grasslands. In addition, because the results of the present study come from short-term (6–8 years) grazing exclusion, it remains uncertain whether these improvements will persist if grazing exclusion is continuously implemented. Therefore, assessing the ecological effects of the grazing exclusion management strategy on degraded alpine grasslands in Tibet still requires long-term continued research.
INTRODUCTION
Tibet is an important ecological security shelter zone that acts as an important reservoir for water, regulating climate change and water resources in China and eastern Asia (Sun et al., 2012). Alpine grasslands are the most dominant ecosystems over all of Tibet, covering more than 70% of the whole plateau's area and representing much of the land area on the Eurasian continent (Wang et al., 2002). Alpine grasslands in this area are grazed by indigenous herbivores, such as yak and Tibetan sheep. These ecosystems have traditionally served as the principal pastures for Tibetan communities and are regarded as one of the major pastoral production bases in China (Wen et al., 2013a). Alpine grasslands also provide ecosystem functions and services, such as carbon sequestration, biodiversity conservation, and soil and water protection, and are also of great importance for Tibetan culture and the maintenance of Tibetan traditions (Wen et al., 2013b;Shang et al., 2014).
Alpine grasslands in Tibet have been degrading regionally, and even desertifying, since the 1980s. For instance, from 1981 to 2004 in northern Tibet, which is the main area of alpine grassland distribution and an important livestock production centre in Tibet, degraded alpine grasslands accounted for 50.8% of the total grassland area, and severely and extremely severely degraded grasslands accounted for 8.0% and 1.7%, respectively (Gao et al., 2010). Grassland degradation may be due to a combination of global climate change, rapidly increasing grazing pressure, rodent damage, and other factors (Chen et al., 2014). Nevertheless, overgrazing, caused by increases in the human and domestic livestock populations in Tibet, is widely considered the primary cause of grassland degradation. Overgrazing may result in significant changes to the composition and structure of the plant community, including significant decreases in the regenerative ability of the grasslands, in biomass, and in the amount of nutrients returned to the soil as litter, eventually causing grassland degradation (Zhou et al., 2005). Additionally, overgrazing increases potential evapotranspiration, thereby promoting warming of the local climate and further accelerating alpine grassland degradation (Du et al., 2004). Under overgrazing by livestock, the succession of degraded grasslands can become a vicious circle: overgrazing causes grassland degradation, which facilitates rodent infestation, which further degrades the grasslands (Kang et al., 2007).
In an attempt to alleviate the problem of grassland degradation in Tibet, China's state and local authorities initiated a program in 2004 called 'retire livestock and restore pastures' (Fig. 1). As part of this campaign, grazing exclusion by fencing has been widely used as an approach to the restoration of degraded grasslands (Wei et al., 2012). Grazing exclusion has been an effective grassland management practice, aimed at preventing grassland degradation and retaining grassland ecosystem function, throughout the world in recent decades (Mata-González et al., 2007; Mofidi et al., 2013). This management strategy is expected to restore vegetation and enhance rangeland health in the overgrazed and degraded grasslands of Tibet, which are characterized by low productivity, low vegetation cover, and low biomass. The campaign has been in progress for more than ten years, which brings to light the question: has this program succeeded in restoring degraded alpine grasslands? This question has attracted great attention in recent years and has inspired a large number of studies on the effect of grazing exclusion on alpine grasslands (Wei et al., 2012; Wu et al., 2012; Shi et al., 2013).
Nevertheless, research results regarding the effect of grazing exclusion on plant biomass and biodiversity have not been consistent. For instance, grazing exclusion can improve grass cover, species biodiversity, and biomass in some degraded grassland ecosystems owing to the absence of grazing (Mata-González et al., 2007; Mofidi et al., 2013). However, grazing exclusion may also decrease species richness and biodiversity through the replacement of species that are highly adapted to grazing by strongly dominant competitors, such as certain graminoids, that increase in abundance once grazing ceases (Mayer et al., 2009; Shi et al., 2013). The lack of a consistent vegetation response to grazing exclusion has been attributed to a broad range of factors that determine whether and how herbivores affect plant communities, including the duration of grazing exclusion (Mayer et al., 2009), growing season precipitation (Wu et al., 2012), productivity (Schultz, Morgan & Lunt, 2011), climatic conditions (Jing, Cheng & Chen, 2013), and so on. Therefore, specific studies are crucial to ensuring that ecosystems are properly managed and that conservation goals are achieved. On the Tibetan Plateau, although numerous studies exploring the effect of grazing exclusion on alpine grassland ecosystem structure and function have been published in recent years (Wu et al., 2009; Shang et al., 2013; Li et al., 2014; Luan et al., 2014; Zhang et al., 2015), the majority focus on a single alpine grassland ecosystem type at a single experimental site. Few studies have assessed the effects of grazing exclusion on alpine grassland ecosystems at the regional scale (Wu et al., 2013; Wu et al., 2014).
To gain a better understanding of the restoration and management of degraded grasslands in Tibet, studies are needed that investigate alpine grassland vegetation growth and community composition dynamics. Thus, the aim of this study was to investigate the effects of excluding grazing herbivores through fencing on high-altitude alpine grasslands in Tibet, and to assess whether fencing can be used as an effective grassland management tool to restore vegetation in degraded alpine grasslands. Three alpine grassland types and nine counties, which represent the main natural alpine grassland distribution in Tibet, were selected as sampling sites according to the time and range of grazing exclusion. We hypothesized that in the absence of grazing, vegetation cover, height, above- and below-ground biomass, species richness, and diversity would improve owing to the absence of disturbance from herbivorous livestock. In addition, given the differences in plant species diversity and community structure, vegetation productivity and cover, and environmental conditions, we further hypothesized that the responses of vegetation biomass and biodiversity to the absence of grazing would differ among alpine grassland types.
Study area
Tibet is located between 26°50′ and 36°29′N and 78°15′ and 99°07′E and covers a total area of more than 1.2 million km², approximately one-eighth of the total area of China (Fig. 2). The main portion of the Qinghai-Tibetan Plateau lies at an average altitude of 4,500 m and is geomorphologically unique in the world. Because of its extensive territory and highly dissected topography, the region has a diverse range of climate and vegetation zones. Annual solar radiation is strong and varies between 140 and 190 kcal cm⁻² in different parts of the region. Annual sunshine tends to increase from east to west and ranges from 1,800 to 3,200 h. The average annual temperature is rather low, with a large diurnal range, and varies from 18 °C to −4 °C; the average temperature in January varies from 10 °C to −16 °C; the average temperature in July varies from 24 °C to 8 °C and decreases gradually from the southeast to the northwest. The average annual precipitation is less than 1,000 mm in most areas of Tibet, reaching up to 2,817 mm in the east and dropping to approximately 70 mm in the west (Zou et al., 2002; Dai et al., 2011).
According to the first national survey of Chinese grassland resources, Tibet ranks first among all Chinese provinces and autonomous regions in the diversity of its grassland ecosystems, comprising 17 types of grassland based on the classification system used for the whole country (Gai et al., 2009). Among all grassland types, alpine steppe is the most common in Tibet; it is composed of drought-tolerant perennial herb
Survey design, sampling, and data collection
Since the 'retire livestock and restore pastures' ecological engineering program started in 2004, more than 2.4 × 10⁶ ha of alpine grasslands in Tibet have been fenced to exclude livestock grazing. Nine counties in Tibet, in which the extent of the fenced area was relatively large, were selected to investigate the effect of grazing exclusion on plant community composition and biomass in alpine grasslands (Fig. 2). These nine counties represented three of the main natural grassland vegetation types in Tibet: alpine meadow, alpine steppe, and alpine desert steppe (Table 1). In these counties, areas that were fenced during 2005-2007 were chosen as sampling sites. Since fencing establishment, the fenced grassland has been completely excluded from livestock grazing, while the surrounding grassland has continued under conventional year-round grazing by yak and sheep. No accurate information on grazing activities and pasture management (such as timing, intensity, and frequency) was available for the open, freely grazed grassland, but the actual average stocking rate ranged from approximately 0.16 sheep units hm⁻² in the westernmost county to 2.05 sheep units hm⁻² in the easternmost county of the study region (Wu et al., 2014). In addition, no specific permits were required for the described field studies, and the field studies did not involve protected animals or plants. The enclosed areas were defined as grazing exclusion (GE) plots and the areas outside the fencing as free grazing (FG) plots. Field surveys were conducted from late July to mid-August in 2013; three pairs (fenced versus freely grazed) of plots were chosen and surveyed at each site.
At each sampling site, three pairs of 0.5 m × 0.5 m quadrats in the GE and FG treatment plots were laid out collinearly at intervals of approximately 20 m. All species within each quadrat were identified, and their coverage, density, frequency, and natural height were measured. Frequency counts were made by dividing the 0.5 m × 0.5 m frame into 10 cm × 10 cm cells. Within each cell, presence/absence data were recorded for each species; these records were summed to calculate frequencies per quadrat (1-100%). The geographical coordinates, elevation, and vegetation type of each site were also recorded, and a picture of each quadrat was taken with a digital camera to calculate community cover. Aboveground and belowground plant components were harvested: aboveground plant parts within the quadrat were clipped to the soil surface with scissors, and belowground plant parts were acquired directly by excavation, including both live and dead roots of all plant species. After sun-drying in the field, the plant samples were brought to the laboratory and oven-dried at 65 °C for 72 h to determine biomass.
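As an aside, the per-quadrat frequency computation described above amounts to a simple proportion over the 25 grid cells; a minimal sketch (with a hypothetical input layout) is shown below.

```python
# Per-quadrat species frequency from presence/absence in 10 cm x 10 cm cells.
import numpy as np

def species_frequency(presence):
    """presence: boolean array of shape (n_species, 25), one flag per species
    per cell of the 0.5 m x 0.5 m frame. Returns frequency in per cent."""
    presence = np.asarray(presence, dtype=bool)
    return 100.0 * presence.sum(axis=1) / presence.shape[1]

print(species_frequency([[1] * 20 + [0] * 5, [1] * 3 + [0] * 22]))  # [80. 12.]
```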
Monthly meteorological datasets with a spatial resolution of 0.5° for 2005-2013 were derived from the China Meteorological Data Sharing Service System (http://cdc.nmic.cn); these surfaces were generated by the thin plate spline (TPS) method using ANUSPLIN software from monthly mean temperature and monthly precipitation data recorded at more than 2,400 well-distributed climate stations across China. The average growing season (May to September) temperature (GST) and growing season precipitation (GSP) for 2005-2013 at the nine site locations were extracted from these meteorological raster surfaces in ArcGIS 10.0 (ESRI, Redlands, California, USA) for further analyses.
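For readers reproducing this step outside ArcGIS, point extraction from a climate raster can be sketched as follows; the file name, band layout, and coordinates are hypothetical, and rasterio is used purely as an illustration of the same operation.

```python
# Extract raster values at site coordinates (hypothetical file and sites).
import rasterio

sites = [(91.1, 29.7), (88.9, 29.3)]  # (lon, lat) pairs, illustrative only

with rasterio.open("gsp_2005_2013_mean.tif") as src:
    gsp = [vals[0] for vals in src.sample(sites)]  # band-1 value per site
print(gsp)
```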
Plant community characteristics
Total cover, community vegetation height, and the Simpson, Shannon, and Pielou indices were used to describe the plant community characteristics of the alpine grassland ecosystems. Total vegetation cover was derived from the photographs of each quadrat using CAN-EYE software (INRA-UAPV, France), and community vegetation height was measured directly as the height of the dominant vegetation within each quadrat. To reveal variation in community composition during grazing exclusion, the Pielou evenness index (E), Shannon diversity index (H), and Simpson dominance index (D) were used to indicate changes in plant community biodiversity. The Pielou evenness index reflects allocation information and species composition. The Shannon diversity index, which in theory ranges from 0 to infinity and incorporates both species richness and evenness, increases as the number of species increases and as individuals become more evenly distributed among species. The Simpson dominance index is based on the probability that two randomly chosen individuals drawn from a population belong to the same species; in the form used here, a higher value indicates higher diversity.
The following formulas were used to calculate the Pielou evenness index (E), Shannon-Wiener diversity index (H), and Simpson dominance index (D):

H = −Σ P_i ln(P_i)
D = 1 − Σ P_i²
E = H / ln(S)

where P_i = n_i/T, n_i is the count of each plant species i in a quadrat, T is the total count of all plant species in the quadrat (so that P_i is the relative probability of finding species i in the quadrat), and S is the total observed number of species in the quadrat.
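These indices are straightforward to compute from the per-quadrat counts; a minimal sketch using the standard definitions given above:

```python
# Pielou (E), Shannon-Wiener (H), and (Gini-)Simpson (D) indices per quadrat.
import numpy as np

def diversity_indices(counts):
    """counts: per-species individual counts n_i within one quadrat."""
    n = np.asarray(counts, dtype=float)
    n = n[n > 0]
    p = n / n.sum()                      # P_i = n_i / T
    s = len(p)                           # observed species richness S
    h = -np.sum(p * np.log(p))           # H = -sum(P_i ln P_i)
    d = 1.0 - np.sum(p ** 2)             # D = 1 - sum(P_i^2)
    e = h / np.log(s) if s > 1 else 0.0  # E = H / ln(S)
    return e, h, d

print(diversity_indices([12, 7, 3, 1]))
```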
Statistical analysis
A paired-difference t-test was used to test differences in the examined parameters between fenced and grazed plots within each grassland type. Analysis of covariance (ANCOVA) using the general linear model (GLM) was employed to evaluate the effects of the grazing exclusion treatment and climatic factors on the plant community and biomass indices in Tibet. In the ANCOVA, the fixed factor was the grazing treatment (FG and GE), and the covariates were GST and GSP; the two covariates did not interact significantly with the fixed factor (P > 0.05). Pearson correlation analysis was used to test the relationships among the plant community composition and biomass indices. The least significant difference test was used to compare means at P < 0.05. All statistical analyses were performed with IBM SPSS Statistics 19 (SPSS/IBM, Chicago, IL, USA).
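The same two analyses can be sketched outside SPSS (a paired t-test plus a GLM-based ANCOVA); the data frame below is invented purely for illustration.

```python
# Paired t-test and ANCOVA sketch with made-up data.
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "site": [1, 1, 2, 2, 3, 3],
    "treatment": ["FG", "GE"] * 3,
    "cover": [42.0, 51.5, 30.2, 37.9, 55.1, 62.3],
    "GST": [7.1, 7.1, 5.4, 5.4, 8.2, 8.2],
    "GSP": [410.0, 410.0, 250.0, 250.0, 520.0, 520.0],
})

# Paired difference t-test between fenced (GE) and grazed (FG) plots per site.
wide = df.pivot(index="site", columns="treatment", values="cover")
t, p = stats.ttest_rel(wide["GE"], wide["FG"])
print(f"paired t = {t:.2f}, p = {p:.3f}")

# ANCOVA via a general linear model: treatment as fixed factor, GST and GSP
# as covariates.
model = smf.ols("cover ~ C(treatment) + GST + GSP", data=df).fit()
print(model.summary().tables[1])
```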
Plant community characteristics
Changes in the selected plant community characteristics are shown in Table 2. Compared with the FG plots, the total vegetation cover of alpine grassland (alpine meadow + alpine steppe + alpine desert steppe) was 8.83% higher (P < 0.05) and the community vegetation height was 2.65 cm higher (P < 0.05) in the GE plots. However, grazing exclusion did not significantly affect the biodiversity of alpine grassland (P > 0.05), although the Simpson, Shannon, and Pielou indices in GE plots were 0.03, 0.07, and 0.01 lower, respectively, than in FG plots. Among the three alpine grassland types, there was no significant difference in the Simpson, Shannon, or Pielou index between FG and GE plots in alpine meadow, alpine steppe, or alpine desert steppe. Nevertheless, a significant difference in total cover was found in alpine steppe, and in community vegetation height in both alpine meadow and alpine steppe (P < 0.05). The ANCOVA results demonstrated that grazing exclusion had a significant effect on vegetation cover and vegetation height but did not affect the biodiversity indices D, H, and E. Among the growing season climate factors, GST had a significant effect on D and H, whereas GSP had a significant effect on most plant community characteristic indices (Table 3).
Aboveground and belowground biomass
Grazing exclusion had a significant effect on the aboveground biomass of alpine grasslands: the mean aboveground biomass of the GE plots was 15.43 g m⁻² higher than that of the FG plots (P < 0.05). However, grazing exclusion had no significant effect on belowground biomass or total biomass (aboveground biomass + belowground biomass) (P > 0.05, Table 2). Among the three alpine grassland types, for alpine meadow there were no significant changes in biomass 6-8 years after fencing, including aboveground, belowground, and total biomass. For alpine steppe, the aboveground, belowground, and total biomass all increased significantly under grazing exclusion (P < 0.05). Moreover, grazing exclusion led to a significant increase in aboveground biomass in alpine desert steppe (P < 0.05, Table 2). The ANCOVA showed that grazing exclusion had a significant effect on the aboveground biomass of alpine grassland ecosystems in Tibet (P < 0.01, Table 3). The effects of the climate factors on biomass differed between GST and GSP: GSP had a significant effect on all biomass indices, whereas GST had no effect on the biomass of alpine grasslands (Table 3).
Table 3: Analysis of covariance (GLM) showing F values and P values of plant community characteristics and biomass indices; the fixed factor was the grazing treatment (free grazing and grazing exclusion), and the covariates were growing season temperature (GST) and growing season precipitation (GSP). P-values below 0.05 are in bold.
Relationship among community characteristics and biomass indices
Correlation analyses showed that total cover was negatively correlated with D, H, and E (P < 0.01) and significantly positively correlated with aboveground, belowground, and total biomass (P < 0.01). In contrast, community vegetation height was positively correlated with D and E (P < 0.01), and no correlations were found between community vegetation height and any of the biomass parameters (P > 0.05). The community biodiversity indices D, H, and E were all positively correlated with one another (P < 0.01). In addition, total biomass was positively correlated with belowground biomass (P < 0.01; Table 4).
DISCUSSION
Overgrazing due to the sharp growth of the human population and of food demand in recent years is a major cause of grassland degradation on the Tibetan Plateau (Wei et al., 2012; Shang et al., 2014). Grassland degradation has significantly altered species composition and decreased productivity in the region (Zhou et al., 2006; Ma, Zhou & Du, 2013). The exclusion of livestock by means of mesh fencing to create large-scale enclosures has become a common management strategy for restoring degraded grasslands of the Tibetan Plateau in recent decades (Wu et al., 2009; Shi et al., 2013). Is grazing exclusion an effective policy for restoring vegetation in degraded alpine grasslands in Tibet? In the present study, three alpine grassland types and nine counties were selected as sampling sites, according to the time and range of grazing exclusion, to investigate the effects of grazing exclusion by fencing on plant community characteristics and biomass in degraded alpine grasslands.
Impacts of grazing exclusion on community characteristics
Vegetation cover is an important index for measuring the protective function of vegetation to the ground. Our study shows that continuous grazing exclusion resulted in a significant increase in the total vegetation cover of alpine grasslands (Table 2). This result is consistent with previous reports, supporting the conclusion that the exclusion of grazing livestock in the degraded alpine grasslands of the Tibetan Plateau exerts a strong effect on ecosystem dynamics by increasing vegetation cover (Wu et al., 2009; Shang et al., 2013). The mean community vegetation height in GE plots was 6.85 cm, approximately 1.63 times that in the FG plots (Table 2). Similar results have been reported in other studies of alpine grasslands in Tibet (Deléglise, Loucougaray & Alard, 2011; Shang et al., 2013). Increased vegetation cover and height on the Tibetan Plateau after fencing have been attributed to the colonization capacity of the vegetation (Shang et al., 2013) and to the prevention of livestock herbivory on forage grasses, especially graminoid and sedge species that are palatable to livestock (Wu et al., 2009). Species diversity, indicated by the Pielou evenness index, Shannon-Wiener diversity index, and Simpson dominance index, showed no statistically significant difference between GE and FG plots (Table 2). Similar results have been reported in the steppe rangelands of the Central Anatolian Region of Turkey (Firincioglu, Seefeldt & Şahin, 2007) and in the temperate semidesert rangelands of Nevada in North America (Courtois, Perryman & Hussein, 2004). However, negative consequences for biodiversity after long-term grazing exclusion have also been found in many types of grassland ecosystems (Schultz, Morgan & Lunt, 2011; Maccherini & Santi, 2012). Therefore, there is no general agreement on the species diversity response to grazing exclusion in grassland ecosystems. On the one hand, changes in plant species diversity under grazing or grazing exclusion depend on resource partitioning and competitive patterns in the vegetation; for instance, some species with lower competitive ability decline in density or disappear from the plant community entirely because of competition for light or nutrients (Grime, 1998; Van der Wal et al., 2004). On the other hand, the biodiversity response also depends on regional variation in major habitat characteristics, such as soil fertility, soil water availability, and growing-season precipitation (Olff & Ritchie, 1998; Wu et al., 2012; Wu et al., 2014).
A comparison of the community characteristics among the three alpine grassland types showed that total cover and the Simpson, Shannon, and Pielou indices were not significantly different between FG and GE plots in any of the three grassland types, except that grazing exclusion increased community vegetation height by 3.01 cm in alpine meadow and 2.74 cm in alpine steppe (P < 0.05) (Table 2). Furthermore, the statistical analyses showed that grazing exclusion had a significant effect on total vegetation cover and vegetation height but did not affect the biodiversity indices (Table 3). These results indicate that short-term grazing exclusion increased vegetation growth but did not lead to obvious changes in community composition in degraded alpine grassland ecosystems. The main differences in plant community characteristics arise chiefly from the growing season climate differences among alpine grasslands (Table 3).
Impacts of grazing exclusion on biomass
Biomass is often considered a good approximation of productivity, especially in grassland communities (Chiarucci et al., 1999). The aboveground biomass of the GE plots was 31.82% higher than that of the FG plots (P < 0.05, Table 2). Grazing exclusion therefore produced obvious improvements in the community aboveground biomass of degraded alpine grassland. Previous studies found that grazing exclusion significantly increased the total aboveground biomass of alpine meadows on the Tibetan Plateau; in the fenced meadows, the grass, sedge, leguminous, and noxious species groups all showed an increase in biomass, whereas only the forb species group showed a decrease (Zhou et al., 2006; Wu et al., 2009). Wu et al. (2013) found that grazing exclusion increased total aboveground biomass by 27.09% in the Changtang region of Tibet. Their results are strongly consistent with the results of this study. The distinct positive effect of grazing exclusion on biomass is mainly attributed to the absence of disturbance from herbivorous livestock (Mata-González et al., 2007; Wu et al., 2009); it may secondarily be attributed to the improvement of soil conditions (soil organic carbon and nitrogen storage, water infiltration rate, basal soil respiration, temperature, and moisture) after grazing exclusion, which favours the regeneration and development of herbaceous species (Zhao et al., 2011; Mofidi et al., 2013).
Among the three alpine grassland types, for alpine meadow the mean values of aboveground, belowground, and total biomass tended to be higher in GE plots than in FG plots, but the differences between them were not statistically significant. For alpine steppe, the aboveground, belowground, and total biomass were all significantly higher under grazing exclusion; for alpine desert steppe, only the aboveground biomass was significantly higher in fenced plots. Wu et al. (2013) also investigated the effect of grazing exclusion on alpine grasslands in the same region of Tibet and found that grazing exclusion tended to increase aboveground biomass by 17.80% in alpine meadow, 34.78% in alpine steppe, and 12.99% in alpine desert steppe, although these values were not statistically significantly different from those of the freely grazed grasslands. Nevertheless, the results of both Wu et al. (2013) and our study show that grazing exclusion improved the aboveground biomass of the alpine grasslands as a whole (alpine meadow + alpine steppe + alpine desert steppe) at the regional scale in Tibet (Table 2).
Increasing evidence shows that precipitation plays a key role in the spatial distribution of species richness and diversity, primary production, and the carbon and water cycles of alpine grassland ecosystems in this region (Hu et al., 2010; Yang et al., 2010; Wu et al., 2012; Wu et al., 2013). The control of vegetation growth and community composition by precipitation gradients was also evident in the present study, in which GSP had a significant effect on all biomass indices of alpine grasslands as well as on the related community characteristic indices (Table 3). Similar results were reported by Wu et al. (2012) and Wu et al. (2014) for this region; therefore, potential shifts in GSP in Tibet should be considered when recommending any policies designed for the vegetation restoration of degraded alpine grasslands in the future.
The aboveground, belowground, and total biomass were positively correlated with total vegetation cover in the alpine grasslands of Tibet (Table 4). In addition, the total vegetation cover of alpine grasslands increased after continuous grazing exclusion (Table 2). It is therefore suggested that the higher biomass in GE plots was due to the increased vegetation cover. Other studies have demonstrated that grassland biomass and vegetation cover can simultaneously decrease or increase with grazing or its cessation (Gao et al., 2009; Li et al., 2011). The simultaneous increase in biomass and vegetation cover under grazing exclusion reflects the absence of disturbance from herbivorous livestock (Jeddi & Chaieb, 2010) and changes in plant competition and reproduction (Jing, Cheng & Chen, 2013). Moreover, the higher aboveground biomass and coverage of certain dominant species in communities under grazing exclusion would result in changes in species dominance and community composition (Wu et al., 2013). These results were partially validated and extended in our study, in which species diversity declined slightly in GE plots as grassland biomass and vegetation cover increased (Table 2). Furthermore, vegetation cover was negatively correlated with the plant biodiversity indicators D, H, and E (P < 0.01), and the aboveground biomass of alpine grassland was negatively correlated with E (P < 0.01) (Table 4).
CONCLUSIONS
The restoration of a degraded grassland ecosystem is a complex and long-term ecological process (Gao et al., 2014; Jing et al., 2014). Six to eight years of grazing exclusion in Tibet has not changed species diversity, as indicated by the Pielou evenness index, Shannon-Wiener diversity index, and Simpson dominance index, but has significantly improved the total vegetation cover, community vegetation height, and aboveground biomass of degraded alpine grasslands. These results demonstrate that grazing exclusion is an effective measure for maintaining community stability and improving aboveground vegetation growth in alpine grasslands. Nevertheless, it is worth noting that in the ANCOVA, growing season precipitation (GSP) had a significant effect on all vegetation indicators except vegetation height, whereas grazing exclusion significantly affected only vegetation cover, vegetation height, and aboveground biomass (Table 3). Therefore, GSP plays a more important role than grazing exclusion in influencing plant community characteristics and biomass in alpine grasslands. In addition, the improvements in vegetation cover, height, and aboveground biomass due to the absence of disturbance from herbivorous livestock reflect the short-term (6-8 years) effects of grazing exclusion examined here, so it is questionable whether these improvements will persist if grazing exclusion is continuously implemented. Long-term observations may be necessary to assess the ecological effects of the grazing exclusion management strategy on degraded alpine grasslands in Tibet. Thus, there is a need for continued research on the role of fencing in grassland restoration, management, and utilization in the future. | 2017-07-21T07:26:04.411Z | 2015-06-16T00:00:00.000 | {
"year": 2015,
"sha1": "2fb528c7219eec4dbf83f493d527bfbd4e85fa34",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.7717/peerj.1020",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2fb528c7219eec4dbf83f493d527bfbd4e85fa34",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Environmental Science",
"Medicine"
]
} |
7879565 | pes2o/s2orc | v3-fos-license | ANTIPROTEINS IN HORSE SERA IV
In the preceding paper (2) it was shown that the subcutaneous injection of horses with rabbit serum albumin was followed by the production of antibody. This antibody resembled the diphtheria antitoxin and anti-egg albumin formed in horses in its solubility in water, its limited zone of flocculation with antigen, and its serological behavior toward an anti-antibody serum. It was also shown that precipitable antibody was not produced following intravenous injections of the same rabbit serum albumin. Nevertheless, it had been found that intravenous injection of pneumococcal lung autolysates into horses elicited the formation of antibody to the pneumococcal protein (3), and that this antibody combined with antigen in a typical precipitin reaction devoid of the prezone characteristic of the reactions of the antitoxins and anti-albumins. It is now shown that intravenous injection of horses with another kind of protein antigen (in this instance rabbit serum globulin) results in the production of antibody giving a typical precipitin reaction. In addition, intracutaneous or subcutaneous injection of horses with rabbit serum globulins gives rise to antibody of the so called "univalent" (4), soluble, low grade, non-flocculating type.
essentially the same before and after injection, 22.8 per cent and 24.8 per cent of the total, respectively. After intravenous injection with mixed albumin-globulin, the serum of horse 1126 gave a pattern in which the γ-peak was prominent. Neither post-immunization pattern showed the pronounced peak between the β- and γ-globulins which characterized the electrophoretic diagrams of the sera of horses 999 and 1127 after subcutaneous injection with albumin or albumin-globulin mixtures (2). * The first seventeen injections were of 100 mg. globulin each, the remainder 25 mg., with few exceptions.
The dosage was gradually increased from 25 to 200 mg. during the first nine injections and then held at that level.
§ From Table I, reference 2.
‖ Dosage reduced for four injections because of febrile reactions. ¶ Dosage of globulin only; an equal amount of albumin was also present.
Examination of Sera for Antibody
Intravenous Sera.--Three and one-half months after the start of the injections, serum from horse 1046 (bleeding Mar. 18, 1940) showed precipitating antibody. A concentrate of the total globulin was prepared from 1950 ml. of this serum by precipitation with ammonium sulfate and dialysis in the cold against 0.9 per cent NaCl. The final volume was 500 ml.
Antibody in this concentrate was determined by the quantitative precipitin method (5, 2), with γ-globulin electrophoretically separated from normal rabbit serum as test antigen. The results are recorded in Table II. Antibodies to rabbit serum components other than γ-globulin were undoubtedly present, since the total globulins, containing some albumin as well, were used as immunizing antigen. Tests with α- and β-globulins were made, but could not be interpreted since it was difficult to secure enough of these components to ensure at least their electrophoretic homogeneity. The rabbit γ-globulin, on the other hand, could be obtained in larger quantities. Its essential homogeneity in the Tiselius apparatus was demonstrated by a second mobility determination. Moreover, the results obtained could be compared with those secured with antigens consisting of specific precipitates containing γ-globulin antibody.
Qualitative precipitin tests with the final bleedings of horse 1126 (Table I) indicated that these sera were similar in their behavior to the corresponding bleedings of horse 1046.
Influence of Temperature on the Reactivity of Antibody.--Two series of precipitin determinations were made on whole serum of horse 1046 at the conclusion of the intravenous schedule. One was set up at 0°C. and the tubes were allowed to stand 3 days in the ice box, with subsequent washing in the cold. The other series was set up at 37°C. and the tubes were incubated for 2½ hours and centrifuged and washed at the same temperature. The data are given in Table III and plotted in Fig. 1.
Determination of Antibody by Absorption with Specific Precipitates.--Since anti-egg albumin
in the rabbit is known to be a γ-globulin (6) and specific precipitates containing such antibody,

Mg. antibody N precipitated = 9.3 (γG N) − 10.9 (γG N)^(3/2)

* All N of the gamma globulin solution was assumed to be antigenically active. ‡ HA = horse antibody; γG N = rabbit electrophoretic gamma globulin N.
§ From analyses on supernatant.
if properly washed, can be obtained free from other serum proteins (7), they provide a convenient source of γ-globulin, suitable for use as antigen (8, 9). The analytical procedure is similar to that of the quantitative agglutinin method (10): an accurately measured amount of a suspension of the washed specific precipitate, of known N content, is added to duplicate portions of serum. If the serum contains much antibody, visible agglutination often occurs after the contents of the tubes are mixed. The tubes are centrifuged, washed 2 to 3 times with 0.9 per cent NaCl, and the precipitates analyzed for N. The excess of N found over that added is taken as antibody N. Absorption with fresh portions of precipitate is continued until no more antibody N is added. Since the antigen is virtually insoluble, inhibition reactions due to excess antigen are avoided. Bleedings from horses injected by the various routes were analyzed by this method. A carefully washed anti-egg albumin egg albumin specific precipitate with a high antibody to antigen ratio was used as antigen, with the results given in Table IV.
(In a refrigerated centrifuge supplied by the International Equipment Co., Boston, Massachusetts.)
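The bookkeeping of this absorption procedure can be sketched in a few lines; the stopping tolerance and the example numbers are illustrative, not values from the paper.

```python
# Antibody N from successive absorptions with a specific precipitate.
def total_antibody_n(n_added, n_found, tol=0.005):
    """n_added, n_found: per-round precipitate N (mg.) added and recovered.
    Each round's excess of N found over N added counts as antibody N;
    rounds stop once the increment is negligible."""
    total = 0.0
    for added, found in zip(n_added, n_found):
        increment = found - added
        if increment <= tol:   # no more antibody is being taken up
            break
        total += increment
    return total

print(total_antibody_n([0.20, 0.20, 0.20], [0.26, 0.22, 0.202]))  # 0.08
```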
Sera Obtained after Intracutaneous and Subcutaneous Injections of Globulins.--Horse 1046 was rested for 5 months after the intravenous injections and then received a course of intracutaneous injections of the same antigens (Table I). Although a bleeding taken Mar. 10, 1941, contained 0.11 mg. of antibody N per 5 ml. when analyzed with particulate γ-globulin antigen as above, it failed to precipitate with any of a number of dilutions of rabbit γ-globulin solution. This indicated that the antibody was univalent (4) or low grade.
Horse 1046 was next given subcutaneous injections of the rabbit serum globulins. Bleedings were taken July 2, 1941, and Oct. 28, 1941. The latter bleeding was negative, but the July 2, 1941, bleeding gave slight precipitation with relatively large amounts of γ-globulin (0.025 to 0.125 mg. N). Since the quantity of antigen used for maximum precipitation was about 1.5 times the antibody content per milliliter determined with the specific precipitate (Table IV), it is probable that the observed reaction was due to a minor component such as β-globulin present as an impurity in the γ-globulin antigen. § Analyses on 2.0 ml., recalculated to 5. ‖ Analyses on 3.0 ml., recalculated to 5. The bleeding of July 2, 1941, was also tested for low grade antibody which could not be precipitated directly by soluble antigens but which could attach itself to specific precipitates containing fully active, multivalent antibody (cf. 4, 11 to 13). Accordingly, duplicate 5 ml. portions of precipitating serum of horse 1046 (bleeding July 25, 1940, after intravenous injections and similar to that in Table III) were set up at 0° in three series: one with added saline, another with 1.0 ml. portions of the bleeding of July 2, 1941, and the third with 4 ml. portions of this bleeding. To each set of mixtures appropriate amounts of antigen were added. The determinations were carried out in the usual manner and are recorded in Table V. The amounts of additional precipitate N due to the coprecipitation of antibody in the July 2, 1941, bleeding are given in the last column of the table.
The specific precipitate method was used to analyze the sera from the remaining horses injected subcutaneously with rabbit serum globulin (No. 999) or with mixed albumin and globulin (No. 1127). As noted in Table IV, appreciable amounts of antibody were present. The direct precipitation reactions with γ-globulin were difficult to interpret, however, since large amounts of antigen were required and the precipitates obtained were relatively small.
Fractionation of Antisera.--In order to study the distribution of the various antibodies between water-soluble and water-insoluble fractions of the globulins of the antisera, 34 ml. of the concentrate of the serum of horse 1046 (Mar. 18, 1940) used in the experiment recorded in Table II were dialyzed against 3 daily changes of 400 ml. of 0.005 M phosphate buffer at pH 6.8. The precipitate (A) was centrifuged off and redissolved in saline. The solution gave an immediate precipitate with rabbit γ-globulin. The supernatant from precipitate (A), after addition of salt, was analyzed with a rabbit anti-egg albumin specific precipitate suspension and found to contain 44 per cent of the antibody originally present. By difference, 56 per cent of the antibody had been precipitated on dilution with water, a proportion somewhat lower than usual with the water-insoluble pneumococcus anticarbohydrate in the horse (14).
Another fractionation was carried out with a late bleeding (July 25, 1942) of horse 1127, which had received mixed albumin and globulin subcutaneously. The serum was precipitated with ammonium sulfate, and the fractions coming down at one-third saturation and between one-third and one-half saturation were each divided into water-insoluble and water-soluble fractions. The reaction of one of these (serum 1127 J, one-third to one-half saturated, water-soluble) with rabbit serum albumin has been described in reference 2. The percentages of the total antibody recovered, as determined by analyses with an egg albumin anti-egg albumin specific precipitate, were: from the water-insoluble portion of the fraction precipitated by one-third saturation with ammonium sulfate, 13 per cent; from the water-soluble portion, 23 per cent; from the water-insoluble portion of the fraction precipitating between one-third and one-half saturation, 2 per cent; from the water-soluble portion, 62 per cent. The water-soluble antibodies, which, in other experiments (2), reacted with soluble rabbit albumin as do antitoxins with toxins, comprised 85 per cent of the total.
DISCUSSION
The production of antibacterial (anticarbohydrate) antibodies by the intravenous injection of horses has been shown to be correlated in most instances with an increase in the amount of electrophoretic γ-globulin (15, 16), with occasional instances in which pneumococcus anticarbohydrate occurred in a new component (β₂ or T) with mobility between those of the β- and γ-globulins (6, 16, 17). On the other hand, antitoxin produced in the horse occurs almost exclusively in this new fraction, absent in most normal horse sera (18, 19).
The electrophoretic patterns obtained in the present series of studies are, in general, those to be expected from the earlier work quoted. No indication is found in the patterns obtained with the sera of either horse 1046 or 1126 of the formation of a new component with mobility between the β- and γ-components. These horses received intravenous injections. In contrast, the patterns for sera 999 and 1127 clearly showed the formation of a new β₂ or T component after subcutaneous injections of rabbit serum albumin. The antigens and the injection schedules for horses 1126 and 1127 were identical; only the routes of injection were varied (Table I), horse 1126 having been injected intravenously, horse 1127 subcutaneously with the same albumin-globulin mixture (cf. (2)).
The antibody present in the serum of horse 1046 after 4 months of intravenous injections with rabbit globulin gave a typical precipitin reaction with a soluble antigen--normal rabbit electrophoretic γ-globulin. This reaction (Fig. 1) is of the type given by pneumococcus anticarbohydrate (20) and antiprotein (3) in the horse. Characteristic is the absence of a prezone in the region of antibody excess; instead, the curves may be extrapolated to the origin. This is, of course, in marked contrast to the behavior of the rabbit serum albumin-horse anti-albumin system (2), and other examples of the so called flocculation reaction (11, 19, 21, and 22).
The data for the globulin anti-globulin system are best represented by an empirical equation (Table II) involving the first and the 3/2 power of the quantity of antigen added and precipitated, as first proposed for several other precipitating systems involving antiprotein formed in the rabbit (23, 4, 8, 9). Although this equation has not yet been derived from fundamental considerations, as has another which best represents numerous other systems (5b, 20d), it has the merit in these instances of fitting linearly a plot representing the ratio of antibody N : antigen N precipitated against the square root of the amount of antigen N added. For comparison of different sera, the data are recalculated to a common antibody content, for example, 1.0 mg. N per ml.
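The algebra behind that linear plot is easy to verify numerically: dividing the empirical relation by the antigen N added gives a ratio of the form a - b·sqrt(antigen N). The constants below are those quoted for the Mar. 18, 1940 bleeding; the antigen range is illustrative.

```python
# Check that the ratio antibody N : antigen N is linear in sqrt(antigen N)
# for the empirical relation antibody N = 9.3*x - 10.9*x**1.5.
import numpy as np

x = np.linspace(0.01, 0.3, 7)       # mg antigen (gamma G) N added
ab_n = 9.3 * x - 10.9 * x ** 1.5    # mg antibody N precipitated
ratio = ab_n / x                    # equals 9.3 - 10.9*sqrt(x)
for xi, ri in zip(x, ratio):
    print(f"antigen N = {xi:5.3f}  ratio = {ri:5.2f}  sqrt = {np.sqrt(xi):5.3f}")
```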
When the bleeding taken from horse 1046 after a 7.5 month intravenous course was set up with rabbit γ-globulin at two temperatures, 0° and 37° (Table III), a rather marked variation of reactivity with temperature was noted.
The antibody precipitable at 37° was only 0.123/0.174, or 71 per cent, of that removed at 0°. This resembles closely the findings obtained with anticarbohydrate systems in the horse (20 c) and in the rabbit (24). On the other hand, precipitating antiprotein (anti-egg albumin) in the rabbit (4) or flocculating antiprotein systems in horse sera (rabbit serum albumin (2); egg albumin (11, 12); diphtheria toxin (21)) have practically negligible temperature coefficients.
The maximum combining ratio (20 d, 4) at 37°--obtained by extrapolation to zero antigen N of the line giving the variation of the ratio antibody N : antigen N precipitated with antigen N added--is also less than at 0°. At the higher temperature it is calculated that only 8.6 mg. of antibody N can be removed per mg. of γ-globulin N, compared with 14.4 mg. at 0°.
As shown in Table IV, the total amount of antibody to γ-globulin, determined at 0°, increased very little (0.18 to 0.20 mg. N/5 ml.) in test samples of the serum of horse 1046 during the last 3 months of the intravenous injections. The quantitative properties of the antibodies show appreciable differences, however. Since the data for Table II were obtained on a dilution of a globulin solution, while those for Table III were on whole serum of different antibody content, it is necessary to compare them on some common basis, such as 1.0 mg. of precipitable antibody N. When this is done the following equations are obtained: It is evident from the above that the initial combining ratio (9.3) and the slope (10.9) characteristic of the March 18, 1940, bleeding (equation (1)) are both significantly lower than the corresponding constants for the later bleedings (equations (2) and (3)). The changes in these two factors are in accord with other quantitative data (4, 12) and with general experience that the reactivities of antibody frequently tend to broaden on progressive immunization.
It will be noted from Table IV that the antibody concentration in the serum of horse 1046 dropped from 0.18 to 0.10 mg. N per 5 ml. at the end of the rest period following intravenous injection of globulin. The antibody content remained practically constant after a series of intracutaneous injections (bleeding March 10, 1941) and then increased markedly, after a further course of subcutaneous injections, to 0.33 mg./5 ml. (July 2, 1941). This increase, however, was due to the gradual replacement of the precipitating antibody by "univalent" antibody (4; 11, 12) which did not precipitate with soluble antigen. The value given was obtained by addition to the serum of a washed specific precipitate composed of egg albumin and rabbit anti-egg albumin. This antibody has been shown to be in the γ-globulin fraction (6) of rabbit sera. This device consequently permitted the use of rabbit γ-globulin in an insoluble form with which the "univalent" antibody could combine and be measured quantitatively.
That the precipitating or "multivalent" form of antibody should not recur during the intracutaneous and subcutaneous injections subsequent to the rest period was indeed unexpected, especially since new antibody of low grade or "univalent" reactivity was produced. Mixtures in various proportions of earlier precipitating bleedings with the non-precipitating antibodies actually gave precipitates with soluble γ-globulin (Table V), providing evidence against any markedly inhibitory action of the "univalent" antibody which might mask the presence of a small amount of residual precipitating antibody. Failure of the precipitating antibody to reappear when the route of injection was changed points strongly toward the essential independence of the physiological mechanisms for producing the two forms of antibody.
When the antibody is removed by attachment to preformed precipitates (egg albumin anti-egg albumin) (Table VI), lower values of the constants are obtained than in Table III, possibly because only the rabbit γ-globulin molecules at the surfaces of the particles are available for interaction with the anti-globulin in the horse serum, or perhaps because of masking of portions of the rabbit globulin configuration by the egg albumin.
While most of the antibodies formed in horses after the subcutaneous injection of rabbit serum albumin are to be found in the water-soluble fraction of the globulins of the antisera (2), the antibodies developed in response to the intravenous injection of rabbit globulin are largely water-insoluble. The quantitative reaction curves (Tables II and III) are also similar to those obtained with bacterial carbohydrate-anticarbohydrate systems in horse sera (20 c, d) and show the same marked temperature coefficient. The antibody formed in response to subcutaneous injection of globulin differs most strikingly from that produced after intravenous injection in its failure to precipitate with globulin in solution.
Since it has now been amply shown that the zonal type of flocculation is not the only type of reactivity possible in antiprotein systems in horse sera, the older classification into anticarbohydrate and antiprotein reactions appears to be an oversimplification. According to Kendall (25), the differences in reactivity between the precipitin type of antibody and the zonally flocculating antitoxin can be accounted for quantitatively by the assumption that in the former molecule the two groups reactive with antigen (bivalent antibody) are alike, and that in the antitoxin molecule the two groups differ in affinity.
Another instance is also provided of the occurrence of low grade antibody free from the precipitating form with which it usually occurs. Horse anti-egg albumin with this property has previously been described (11, 12) as occurring in an early stage of the immunization, while serum from later bleedings gave a characteristic zone of flocculation.
It is accordingly clear that the horse can produce a number of antibodies with differing chemical, physical, and serological properties. The route of injection and the nature of the antigen are major factors in determining the type of response. Rabbit serum albumin does not appear to be antigenic in the horse when administered intravenously but leads to the formation of the antitoxic type of antibody when given subcutaneously. Rabbit serum globulin, on the other hand, functions as an antigen by both routes, but stimulates the production of precipitating antibodies only when injected intravenously.
* One and one-half quantities actually used.
§ One determination only.
‖ S N = antigen suspension N added.
It is not clear what property of the antigen might be concerned in these effects. Egg albumin, serum albumin, and diphtheria toxin are of lower molecular weight than serum globulin, which produces precipitating antibodies in the horse, but molecular size cannot be the sole decisive factor since the subcutaneous injection of horses with hemocyanin (molecular weight 7,000,000) results in antibody of the antitoxin type (26). Clarification of this problem must therefore await further study.
SUMMARY
1. The intravenous injection of two horses with alum-precipitated rabbit serum globulin resulted in the production of antibody which gave a typical precipitin reaction without a prezone in the region of antibody excess.
2. The chemical, physical, and serological properties of this antibody are comparable to those of the more familiar anticarbohydrate antibodies.
3. The subcutaneous injection of horses with the globulin antigen gave rise to low grade "univalent" antibody which did not precipitate with soluble antigen.
4. The low grade antibody could be removed from solution by attachment to preformed specific precipitates, or by coprecipitation in the presence of "multivalent" precipitating antibody.
5. It is concluded that the familiar antitoxin type of antibody is not the only form of antiprotein response in horses but that precipitating and low grade non-precipitating antibodies may also be formed.
6. The nature of the antigen and the route of injection are demonstrated to be important factors in determining the characteristics of the antibody formed.
BIBLIOGRAPHY
Fig. 1. Effect of temperature on the precipitation of horse antibodies to rabbit serum globulin by electrophoretically separated rabbit serum γ-globulin.
TABLE I
Injection Schedule of Horses Receiving Alum-Precipitated Rabbit Serum Globulin or Albumin-Globulin Suspensions
TABLE V
Addition of Non-Precipitating Antibody from H 1046, Bleeding July 2, 1941, to Precipitates Formed from Rabbit γ-Globulin Antigen and Horse Serum H 1046, Bleeding July 25, 1940
TABLE VI
Absorption of Antibodies to Rabbit Globulin in Sera of Horse 1046 by Means of a Specific Precipitate. | 2018-05-25T20:25:39.954Z | 2003-01-01T00:00:00.000 | {
"year": 2003,
"sha1": "4e421d91374dd4a22485997ea4678ff5ae5b4ef6",
"oa_license": "CCBYNCSA",
"oa_url": null,
"oa_status": null,
"pdf_src": "ScienceParseMerged",
"pdf_hash": "86c42eb488d52a82a166dcd8a5d08ddb6f4822b8",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": []
} |
254976357 | pes2o/s2orc | v3-fos-license | High MBL-expressing genotypes are associated with deterioration in renal function in type 2 diabetes
Introduction Accumulating evidence supports that mannan-binding lectin (MBL) is a promising prognostic biomarker for risk-stratification of diabetic micro- and macrovascular complications. Serum MBL levels are predominately genetically determined and depend on MBL genotype. However, Type 1 diabetes (T1D) is associated with higher MBL serum levels for a given MBL genotype, but it remains unknown if this is also the case for patients with T2D. In this study, we evaluated the impact of MBL genotypes on renal function trajectories and serum MBL levels, and compared MBL genotypes in newly diagnosed patients with T2D with age- and sex-matched healthy individuals. Furthermore, we evaluated differences in parameters of insulin resistance within MBL genotypes. Methods In a cross-sectional study, we included 100 patients who were recently diagnosed with T2D and 100 age- and sex-matched individuals. We measured serum MBL levels, MBL genotype, standard biochemistry, and DEXA in all participants. A 5-year clinical follow-up study was conducted, followed by 12-year data on follow-up biochemistry and clinical status for the progression to micro- or macroalbuminuria for the patients with T2D. Results We found similar serum MBL levels and distribution of MBL genotypes between T2D patients and healthy individuals. The serum MBL level for a given MBL genotype did not differ between the groups at either study entry or 5-year follow-up. We found that plasma creatinine increased more rapidly in patients with T2D with the high MBL expression genotype than with the medium/low MBL expression genotype over the 12-year follow-up period (p = 0.029). Serum MBL levels did not correlate with diabetes duration or with HbA1c. Interestingly, serum MBL was inversely correlated with body fat percentage in individuals with high MBL expression genotypes both at study entry (p=0.0005) and at 5-year follow-up (p=0.002). Discussion Contrary to T1D, T2D is not per se associated with increased MBL serum level for a given MBL genotype or with diabetes duration. Serum MBL was inversely correlated with body fat percentage, and T2D patients with the high MBL expression genotype presented with deterioration of renal function.
Introduction

Increasing evidence supports that activation of the complement system plays an important role in developing diabetes-related micro- and macrovascular complications (1,2). Mannan-binding lectin (MBL) is a serum protein primarily produced in the liver with the ability to distinguish between self and non-self cells based on carbohydrate pattern recognition. MBL binding to carbohydrate patterns initiates the lectin pathway, which leads to inflammatory cell recruitment, opsonization, and membrane attack complex formation (3). Hyperglycemia is associated with altered carbohydrate patterns present on the self-cell surface (4). Thus, MBL may cause inexpedient complement activation and tissue injury through binding to glycated self-tissue.
We and others have shown that high expression MBL genotypes and high MBL serum levels are associated with an increased risk of both micro-and macrovascular complications in patients with diabetes (9)(10)(11)(12). Additionally, we observed a significantly increased mortality in patients with T1D and the high MBL expression genotype as compared with the patients with a "non-high-expression" MBL genotype (13). High MBL level has been reported as an independent marker of diabetic nephropathy and cardiovascular disease both in patients with T1D (9,14) and T2D (15, 16). To our knowledge, no longitudinal studies have been performed in regard to MBL levels and development in renal function.
Several studies have found associations between serum MBL levels and insulin resistance (17,18), though in nondiabetic settings low serum MBL levels were associated with insulin resistance. A study of weight loss and changes in insulin resistance and serum MBL levels showed that weight loss-induced changes in serum MBL concentration were positively associated with the increase in insulin sensitivity (18). This indicates that MBL levels are influenced by the degree of insulin resistance.
We have previously found significantly higher serum MBL levels among patients with T1D compared to healthy subjects (9,12). However, the distribution of MBL genotypes did not differ between the groups (12) and therefore did not explain the phenotypic differences. These findings are supported by significantly higher serum MBL in patients with new-onset juvenile T1D as compared with their non-diabetic siblings matched for high-expression MBL genotype (19) as well as in animal studies (20).
It remains unknown if T2D per se is associated with altered MBL serum levels for a given MBL genotype. We have previously found comparable serum MBL levels in T2D and healthy individuals, but the relation to MBL genotype distribution was not clarified (21).
The current study aimed to investigate MBL serum level and MBL genotype in newly diagnosed T2D patients compared to age- and sex-matched healthy individuals. Additionally, we examined the relationship between MBL and renal decline as well as diabetes duration and insulin resistance.
Methods
Participants

100 patients with type 2 diabetes (T2D) were consecutively recruited from the outpatient clinic, and 100 healthy subjects (controls) matched for age and sex were included in the study, as previously described (22). Inclusion criteria at study entry were >18 years of age and, for patients, <5 years duration since diagnosis of diabetes. The diabetes-related treatment for all patients with type 2 diabetes was managed by their general practitioner according to standard guidelines, including insulin treatment, antihyperglycemic treatments other than insulin, and non-pharmacological treatment such as guidance in lifestyle intervention. The healthy subjects were recruited by advertising in the local press and were excluded if diabetes was diagnosed by fasting glucose and oral glucose tolerance tests. General exclusion criteria were: acute or chronic infectious disease, end-stage renal failure, pregnancy or lactation, prior or present cancer, and contraindications to MRI scanning (claustrophobia, magnetic material in the body, and body weight > 120 kg). Three T2D patients and one healthy individual were excluded from the study due to missing genotype, no serum MBL sample, or withdrawal of consent.
The participants were invited for a 5-year follow-up visit, which 63 patients with T2D and 72 healthy controls attended (23). In total, 37 T2D patients dropped out during follow-up for the following reasons: follow-up invitation rejected (n=24), no contact (n=6), GAD-positive (n=2), deaths (n=5). Twenty-eight participants from the control group dropped out during follow-up for the following reasons: follow-up invitation rejected (n=22), no contact (n=2), death (n=4). For this study, we obtained a dual-energy x-ray absorptiometry (DEXA) scan, blood and urine sampling, and the medical history of the participants was obtained by a questionnaire at both visits. A 12-year follow-up on clinical diabetes status was also performed: HbA1c, UACR, creatinine, and CRP were obtained from the medical records for the patients with T2D (n = 54).
The study was conducted according to the Declaration of Helsinki and was approved by the local Ethical Committee (1-10-72-349-13) and by the Danish Data Protection Agency (1-16-02-505-13), Denmark. All participants gave their written, informed consent to participate.
Blood analyses
Fasting blood samples were obtained from the antecubital vein. The serum was separated and stored at -80°C until further analysis. Serum MBL levels were measured using an in-house time-resolved immunofluorometric assay with a lower detection level of 10 μg/L, as previously described (24). The intra- and inter-assay variations (%CV) were below 10%. High-sensitivity C-reactive protein (hsCRP) levels were quantified by an in-house assay as previously described (25). The limit of detection was 0.005 μg/L. The intra- and inter-assay variations (%CV) were below 5 and 6%, respectively.
All other blood and urine samples were analyzed with accredited methods at the Department of Clinical Biochemistry at Aarhus University Hospital.
MBL expression genotypes
Genomic DNA was extracted from whole blood using the Maxwell 16 System Blood DNA Purification Kit (Promega, Madison, WI, USA) according to the manufacturer's protocol. Genotyping for six SNPs (single-nucleotide polymorphisms) in the MBL2 gene was performed using a real-time polymerase chain reaction with TaqMan SNP Genotyping Assays (Applied Biosystems, Foster City, CA, USA), as previously described (26). Three SNPs are located within the promoter region of the MBL2 gene (rs11003125, rs7096206, rs12780112) and three SNPs (rs1800450, rs5030737, rs1800451) are located in exon 1 of the MBL2 gene. Because of linkage disequilibrium, the six SNPs give rise to seven major haplotypes (HYPA, LYQA, LYPA, LXPA, LYPB, LYQC, and HYPD) that were further categorized into three MBL expression genotypes: low (O/O), medium (A/O), and high (A/A), as previously described (5, 7). These MBL expression genotypes have previously been shown to correlate with serum MBL levels of ≤100 μg/L, 101–1,000 μg/L, and >1,000 μg/L, respectively (8).
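To make the categorization concrete, the mapping from a pair of MBL2 haplotypes to the three expression genotypes can be written in a few lines. The sketch below is our own illustration, not the authors' genotyping pipeline; it keys only on the exon-1 structural allele (the final letter of each haplotype name) and deliberately ignores the promoter variants, which fine-tune expression within a structural class.

```python
# Minimal sketch (not the study's code): classify a pair of MBL2
# haplotypes into the three-level MBL expression genotype.
# Haplotypes ending in A carry the wild-type exon-1 allele (A);
# B, C, and D are the variant structural alleles (collectively O).

HAPLOTYPES = {"HYPA", "LYQA", "LYPA", "LXPA", "LYPB", "LYQC", "HYPD"}

def structural_allele(haplotype: str) -> str:
    """Return 'A' for wild-type exon-1 haplotypes, 'O' for variants."""
    if haplotype not in HAPLOTYPES:
        raise ValueError(f"unknown MBL2 haplotype: {haplotype}")
    return "A" if haplotype.endswith("A") else "O"

def expression_genotype(hap1: str, hap2: str) -> str:
    """Map a haplotype pair to low/medium/high MBL expression."""
    alleles = tuple(sorted(structural_allele(h) for h in (hap1, hap2)))
    return {("A", "A"): "high (A/A)",    # serum MBL typically > 1,000 ug/L
            ("A", "O"): "medium (A/O)",  # 101-1,000 ug/L
            ("O", "O"): "low (O/O)",     # <= 100 ug/L
            }[alleles]

print(expression_genotype("HYPA", "LYPB"))  # -> medium (A/O)
```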
Blood pressure
Ambulatory blood pressure (BP) was measured at 20-min intervals for 24 h using Spacelab 90217 (Spacelabs Healthcare, Issaquah, Washington, USA) in between study days. Office BP was measured on the right arm with an appropriately sized cuff, and mean SBP and DBP were calculated as the average of three measurements obtained by an oscillometric BP monitor (Riester Champion N; Riester GmbH; Jungingen, Germany) after more than 5 min of rest in the seated position. Mean arterial BP (MAP) was calculated as DBP + 0.4 × pulse pressure (PP).
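Since the 0.4 weighting differs from the DBP + PP/3 rule often quoted elsewhere, a one-line helper (our illustration, not study code) makes the convention used here explicit.

```python
def mean_arterial_pressure(sbp_mmhg: float, dbp_mmhg: float) -> float:
    """MAP = DBP + 0.4 x pulse pressure, the convention used in this study."""
    pulse_pressure = sbp_mmhg - dbp_mmhg
    return dbp_mmhg + 0.4 * pulse_pressure

print(mean_arterial_pressure(130, 80))  # 80 + 0.4 * 50 = 100.0 mmHg
```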
Body composition
A dual-energy X-ray absorptiometry scan (DEXA Discovery System; Hologic, Marlborough, Massachusetts, USA) was performed to estimate lean and fat body mass.
Urinary albumin-to-creatinine ratio and estimated glomerular filtration rate

Urinary albumin excretion (UAE) was evaluated by albumin-to-creatinine ratios (ACR) in three morning urine samples. Patients were classified as microalbuminuric when at least two of three samples had urinary albumin-to-creatinine ratios of 2.5–25 mg/mmol (men) and 3.5–35 mg/mmol (women) or above. The estimated glomerular filtration rate (eGFR) was calculated from serum creatinine using the MDRD study equation (27).
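For concreteness, both derived quantities can be sketched in code. The eGFR helper below assumes the standard four-variable (IDMS re-expressed) MDRD equation from the general literature, since the study only cites its reference (27) for the formula; the albuminuria helper implements the two-of-three-samples rule stated above.

```python
def mdrd_egfr(creatinine_umol_l: float, age_years: float,
              female: bool, black: bool = False) -> float:
    """eGFR (ml/min/1.73 m^2) via the 4-variable MDRD equation.

    Assumption: the IDMS re-expressed MDRD formula; the exact variant
    used in the study is given by its reference (27).
    """
    scr_mg_dl = creatinine_umol_l / 88.4  # convert umol/L to mg/dL
    egfr = 175.0 * scr_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

def is_microalbuminuric(acr_mg_mmol: list[float], female: bool) -> bool:
    """At least two of three morning ACR samples at/above the threshold."""
    threshold = 3.5 if female else 2.5
    return sum(acr >= threshold for acr in acr_mg_mmol) >= 2
```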
Statistics
Data are presented as mean (± SD) if normally distributed and median (IQR) if non-normally distributed. Given the exploratory aims of the cohort study, power calculations were not carried out. MBL measurements below the assay detection limit were set to 10 μg/L. Comparisons between groups were performed using an unpaired t-test for normally distributed data and Mann-Whitney's U-test for non-normally distributed data. Comparisons of paired data points were achieved with the Wilcoxon signed-rank test. Multiple linear regression (generalized linear model) was used to allow adjustment for confounders. The chi-square test tested the difference in distribution between non-continuous variables. Spearman's correlation analysis was used to estimate the strength of the association between non-normally distributed variables. P-values were considered significant if <0.05. To characterize the effect of MBL expression genotype on different biomarkers, mixed-model ANOVA was performed where data passed the normality test. All calculations were performed in R.
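The decision rules above translate directly into a short helper. The study's calculations were done in R; purely for illustration we sketch the test-selection logic in Python, with Shapiro-Wilk as an assumed normality check, since the Methods do not name the specific normality test used.

```python
from scipy import stats

def compare_groups(x, y, alpha: float = 0.05):
    """Unpaired t-test if both samples pass a normality check,
    otherwise Mann-Whitney's U-test, mirroring the Methods."""
    normal = (stats.shapiro(x).pvalue > alpha and
              stats.shapiro(y).pvalue > alpha)
    if normal:
        return "unpaired t-test", stats.ttest_ind(x, y).pvalue
    return "Mann-Whitney U", stats.mannwhitneyu(x, y).pvalue

def spearman(x, y):
    """Spearman correlation for non-normally distributed variables."""
    rho, p = stats.spearmanr(x, y)
    return rho, p
```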
Results
Clinical characteristics of the T2D patients and their age- and sex-matched healthy individuals are shown in Table 1. The diabetes duration at study entry was 1.9 years (IQR: 0.7;3.2). All the participants had well-adjusted treatment regarding glycemia, blood lipids, and blood pressure. The T2D patients had a higher body fat percentage, HbA1c, albumin/creatinine ratio (ACR), and hsCRP as compared to the healthy individuals. A significantly higher proportion of the T2D group was treated with statins, which may explain their significantly lower total and LDL cholesterol levels compared to the control group. Similarly, the T2D group was more often treated with one or more antihypertensive medications, which was probably the reason for T2D patients not having higher blood pressure than the control group. Data from 63 patients with T2D and 72 controls were available for 5-year follow-up (Table 2), and the baseline characteristics for the paired group have previously been described (28).
MBL was reduced in the low MBL expression genotype of patients with T2D, from 15 μg/L (IQR 11;21) at study entry to 10 μg/L (IQR 10;11), p=0.025, and in the control group from 24 μg/L (IQR: 10;30) to 16 μg/L (IQR: 11;29), p=0.4. However, due to the low sample size and values near the assay detection limit, the statistics should be interpreted cautiously in this subgroup. Adjustments for HbA1c, ACR, antihypertensive treatment, and statin treatment, respectively, did not change the results, neither in the total cohort nor after subdividing by MBL expression genotype.
MBL differences between sexes:
Men had higher serum MBL levels compared to women in the entire study group (991 μg/L (IQR 299;1831) vs. 460 μg/L (IQR 186;1024), p= 0.004). This was also the case with men in the control group, who had significantly higher serum MBL levels compared to women (1107 μg/L (IQR 318;1950) vs. 442 μg/L (IQR 207;1148), p= 0.03). However, the serum MBL level was not significantly different between men and women within the T2D group.
Correlations between MBL, body fat %, and insulin resistance:
A weak inverse correlation between serum MBL level and body fat percentage was present in the entire group (r=-0.21, p=0.003). The outcome was not influenced by either HbA1c, antihypertensive treatment, statin treatment, or diabetes status when analyzing data in a multiple linear regression analysis. However, MBL expression genotype significantly influenced the correlation between MBL and body fat percentage (p< 0.001). Subdividing by MBL genotype, a negative correlation with fat percentage was present for the high MBL expression genotype (r= -0.51, p=0.0005) (Figure 3). No correlations were found in medium or low MBL expression genotypes, which might be due to the low sample size. At the 5-year follow-up, these correlations were still present in the total cohort (r= -0.268, p=0.002).
No correlations were found between MBL and HOMA-B or HOMA-IR, and no significant differences in MBL serum levels were found in T2D patients treated with insulin compared to those treated with other anti-diabetic medication or with lifestyle intervention alone. Also, no significant correlations were found between serum MBL level and HbA1c, hsCRP, or total, LDL- or HDL-cholesterol, respectively, at either study entry or 5-year follow-up (Table 3). Statin treatment or any antihypertensive treatment was not significantly associated with altered MBL serum levels.
The difference in biomarkers for nephropathy at 12-year follow-up:

In participants with T2D, both eGFR and plasma creatinine differed significantly between high and medium/low expression genotypes after 12 years of follow-up. The median eGFR was 70.0 ml/min/1.73m² (IQR 55.5;85.5) in the high MBL expression genotype and 76.9 ml/min/1.73m² (IQR 80.9;90.0) in the medium/low MBL expression genotype (p= 0.012). Plasma creatinine was significantly higher in the high MBL expression genotype (76.7 μmol/L (IQR 68.6;98.2)) as compared to 70.0 μmol/L (IQR 68.6;98.2) in the medium/low MBL expression genotype (p= 0.02). A mixed-model ANOVA was performed to compare the effect of the MBL genotype on the development of plasma creatinine. For patients with T2D, we found that plasma creatinine increased more rapidly in the high MBL expression genotype than in the medium/low MBL expression genotype (Figure 4), F(2, 100) = 4.09, p = 0.029. Statistical testing of hsCRP, HbA1c, and the ACR did not uncover any significant differences after 12 years of follow-up.
Data from 54 patients with T2D were available for our final analysis (baseline characteristics of participants attending versus those not attending the follow-up visit are listed in Table S1). Participants lost to follow-up differed significantly from those who attended at baseline (Table S1).

FIGURE 3 Scatter plots with trend lines showing a possible linear relationship between body fat percentage and serum MBL levels, divided by MBL expression genotype. Grey dots represent healthy individuals and black dots represent participants with T2D. Spearman's correlation analysis was used to generate the correlation coefficients and associated p-values.
Discussion
We showed a similar distribution of MBL genotypes in newly diagnosed T2D patients as in age- and sex-matched healthy individuals. The distribution of MBL expression genotypes in healthy individuals has previously been reported in a large Danish population study (n=9245) (29). They showed MBL expression genotype frequencies of 58% for high (A/A), 37% for medium (A/O), and 5% for low (MBL-deficient, O/O). We have recently shown a frequency slightly shifted towards MBL deficiency (A/A 54.5%, A/O 30.8%, and O/O 14.7%) in a group (n=3043) of newly diagnosed T2D patients (16). This distribution is similar to the distribution found in the present study.
Here we showed that serum MBL levels for a given MBL expression genotype were not altered in patients with recently diagnosed T2D as opposed to what has previously been observed in patients with T1D. In a group of patients with T2D and suspected acute myocardial infarction (30), the distribution of MBL genotypes was comparable to the background population, however, as a group, the T2D patients with AMI had higher serum MBL levels than previously reported in the background population of T2D patients, which may be related to vascular stress (30). Although serum MBL levels are known to be determined largely by polymorphisms in the MBL2 gene, differences in circulating MBL levels of up to 10-fold can be found between individuals despite identical genotypes (6).
Portal hypoinsulinemia is present when the portal system is bypassed in subcutaneous insulin administration, as opposed to pancreas-secreted insulin. The increased serum MBL level for a given genotype seen in T1D patients (9) has generated the hypothesis that the portal hypoinsulinemia characteristic of T1D may increase hepatic MBL synthesis. T2D, on the other hand, is characterized by insulin resistance, including hepatic insulin resistance. Thus, it seems plausible that T2D patients may also have high MBL serum levels for a given MBL genotype because of hepatic insulin resistance. However, the current study did not find evidence of this mechanism. A possible explanation for the lack of stimulation could be that hepatic insulin resistance is counteracted by increased endogenous insulin secretion and as a net result does not affect MBL levels in T2D, or the fact that the patients with T2D were not significantly more insulin resistant than the control group. The inclusion criteria included a maximum diabetes duration of 5 years. In vitro studies with human hepatocytes supported clinical observations of hormonal influence on MBL synthesis, with a significant increase in MBL expression after incubation with thyroid hormones and growth hormone (31), whereas insulin and IGF-1 had no effect. Interestingly, we found that MBL levels decreased after 5 years of follow-up in healthy individuals with the high MBL expression genotype and remained unchanged in patients with T2D; this was not explained by weight gain or change in fat percent.
We found a highly significant inverse correlation between MBL and fat percentage, determined by a whole-body DEXA scan, a very precise expression of obesity. This finding is supported by other studies. In women with polycystic ovary syndrome, obesity was associated with lower MBL levels (17). Among non-diabetic men, MBL serum level was significantly lower in obese subjects than in lean subjects. Further, MBL was significantly correlated with insulin sensitivity as assessed by euglycemic-hyperinsulinemic clamp (18). In a study of nine morbidly obese women, MBL levels also correlated with insulin sensitivity (32). An alternative explanation for the inverse associations between MBL levels and insulin sensitivity suggests that MBL acts as an anti-inflammatory protein by promoting phagocytic clearance of various inflammatory agents, which would in turn cause subjects with low MBL levels to develop chronic low-grade inflammation and ensuing insulin resistance or obesity. However, we have shown that MBL levels are unaffected by significant weight loss in obese subjects (33). In the current study, we found no correlation between MBL and hsCRP, an accepted marker of low-grade inflammation. We therefore find the hypothesis of MBL as an anti-inflammatory protein less plausible and are inclined to the view that MBL secretion is under the regulation of insulin.

FIGURE 4 Dot plot with trend lines showing development in plasma creatinine in participants with T2D over the 12-year follow-up period. Black points represent participants with the high MBL expression genotype (n=22) and the grey points represent participants with medium/low MBL expression genotypes (n=30). A mixed-model ANOVA was performed to compare the effect of the MBL genotype on the development of plasma creatinine.
It seems somewhat contradictory, though, that fat percent (followed by insulin resistance) is inversely correlated with MBL levels, whereas T2D patients (insulin resistant) overall have MBL levels similar to age- and sex-matched healthy individuals. We speculate that in obese, non-diabetic individuals, insulin resistance is overcome by increased endogenous insulin secretion, making their net insulin action in the liver able to suppress MBL production. At the time when endogenous insulin secretion can no longer counteract insulin resistance (at the debut of T2D), the net insulin action in the liver is lower and does not overall suppress MBL, although, within the group of T2D, there is still an increasing suppression of MBL with increasing fat percent and obesity.
We found that MBL serum levels were inversely associated with fat percent and triglycerides in the group of T2D patients at study entry, and also after 5 years of diabetes duration for the fat percent. A common pathway in the form of peroxisome proliferator-activated receptor-α (PPARα) for regulating MBL levels and triglycerides has been suggested by (34). PPARs are ligand-activated transcription factors known to regulate glucose, fatty acid and lipoprotein metabolism, energy balance, and inflammation, among others (35). Along with the regulation of lipid and glucose metabolism, PPARα is an attractive candidate gene for the risk of metabolic syndrome and T2D. Hepatic MBL2 gene expression and circulating MBL levels are reported to be stimulated by PPARα and fenofibrate in humans, linking PPAR to the regulation of innate immunity and complement activation in humans, thus suggesting a possible role of MBL in lipid metabolism (34).
The present study shows an increasing plasma creatinine concentration over the 12-year follow-up period in the T2D patients with the high MBL expression genotype as compared to the medium/low MBL expression genotype. This corresponds with our previous findings that the high MBL expression genotype was more frequent in T1D patients with diabetic nephropathy than in those with normal urinary albumin excretion (10), and that the presence of high MBL genotypes was associated with a 1.5-fold increased risk of developing nephropathy compared to patients with a 'low expression' MBL genotype (9) in patients with T1D. Also, serum MBL concentrations were significantly higher in patients with macroalbuminuria as compared to patients with a normal albumin excretion rate, which persisted in the group of patients with high MBL expression genotypes (10); however, no SNPs in the MBL2 gene were reported to confer risk of T1D or diabetic nephropathy. Cai et al. showed that serum and urine MBL levels were higher in patients with T2D and diabetic nephropathy who were prone to develop end-stage renal failure (36), and several studies have reported an association between high plasma MBL levels and the development of diabetic nephropathy in patients with T2D (15, 37, 38). In contrast, Adrian et al. showed that the MBL genotype was not associated with long-term clinical effects in patients with end-stage renal disease; however, only 8% of the patients in the study had diabetes (39). We have recently shown a U-shaped association between serum MBL, high MBL expression genotypes, and risk of cardiovascular events in a large group of patients newly diagnosed with T2D, but not with all-cause mortality, supporting the role of MBL in vascular complications (16).
Limitations
The group of T2D patients is newly diagnosed and well-regulated and may therefore not be representative of T2D patients in general. Unfortunately, only 63% (T2D) and 72% (healthy controls) of the individuals were able or willing to participate in the 5-year follow-up study, and we were only able to obtain 12-year follow-up biochemistry from patients with T2D but no data from the healthy control group.
Conclusion
In conclusion, we found similar MBL serum levels for any given MBL genotype between T2D patients and matched healthy control subjects. Furthermore, we found a significant inverse correlation between fat percentage and serum MBL levels in the high MBL expression genotype, but the mechanism for this finding needs to be further investigated. We show deterioration of renal function, illustrated by lower eGFR and increased serum creatinine in T2D patients with the high MBL expression genotype.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding authors.
Ethics statement
The studies involving human participants were reviewed and approved by the Central Denmark Region Committees on Health Research Ethics (1-10-72-349-13) and by the Danish Data Protection Agency (1-16-02-505-13), Denmark. The patients/participants provided their written informed consent to participate in this study.
Author contributions
Conception and study design: GD, JØ, TH and MB. Patient recruitment and collection of clinical samples: PH, EL, KF, PP and TH. Methods and data analysis: GD, RS and MB. Manuscript drafts: GD and MB. All authors interpreted the data. All authors contributed to the article and approved the submitted version.
Funding
The study was funded by the A.P. Møller Foundation and Steno Diabetes Center Aarhus. | 2022-12-23T14:15:09.965Z | 2022-12-23T00:00:00.000 | {
"year": 2022,
"sha1": "cac4d62993be510939726108e38cf37bb0d90b53",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "cac4d62993be510939726108e38cf37bb0d90b53",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
216197781 | pes2o/s2orc | v3-fos-license | Entrepreneurs, Teams, and Bureaucracy in Post-WWII America
This article leans against specialization by cutting across three disciplines to analyze the entrepreneurial function in modern U.S. capitalism. The author blends the basic ideas of Joseph A. Schumpeter (economics), Alfred D. Chandler (history), and Max Weber (sociology), with recent work done by Daniel Kahneman in behavioral economics. Two case studies are used to illustrate how these ideas interact in the study of innovation; one of the case studies focuses on a startup business and the other on a large, well-established, bureaucratic firm.
well be the most significant of the cognitive biases (Note 11)." Insofar as startups were significant during the post-WWII era, Kahneman makes us ponder the System 2 (slow thinking) capabilities they needed to succeed; they include: the ability to raise capital under conditions of uncertainty; to build an effective team; and to deal successfully with their business's competitive and political environments.
Insofar as large firms were innovative, Kahneman can benefit by accommodating Chandler's great contribution: his emphasis upon the benefits of the massive organizational changes that took place when U.S. business leaders transformed centralized into decentralized and diversified industrial firms.
Once again, there can be a creative sharing of ideas.
Max Weber and Kahneman
Now let's add our third great hedgehog to the history. The sample I am using from Weber's extensive work is the section of "The Theory of Social and Economic Organization" that deals with the three great structures of authority (Note 12). You will probably recall that Weber said bureaucracy "is superior to any other form [of authority] in precision, in stability, in the stringency of its discipline, and in its reliability." Central to Weber's theory was his conclusion that all of the modern, developed societies had moved relentlessly toward the bureaucratic structure of authority because it was the most rational and efficient way to organize large groups of people (Note 13). Weber's focus was primarily on the public sector, but he was mindful that modern businesses were also bureaucracies.
The task of blending Weber's sociology of "pure types" with behavioral economics and focusing the combination on entrepreneurship is challenging because Weber never seems to have worried about innovation and Kahneman only flirts briefly (pp. 417-418) with organizations and never specifically discusses bureaucracy. Despite this problem, I think we can link these two bodies of thought.
Fortunately, Chandler and Schumpeter have provided us with intellectual bridges to Weber through their descriptions of private sector bureaucratization and its implications for the capitalist system.
Schumpeter was intensely negative about both public and private bureaucratization; Chandler was intensely positive about what he referred to as "professional management". He was largely dismissive of the public sector.
Left suspended between these two evaluations, we can start to figure out where we stand by asking how public and private bureaucracy impacted the fast thinking (System 1) aspects of entrepreneurship.
Clearly, following analysts such as Robert Merton, James B. Taylor, and Alvin Gouldner, bureaucracy in all of its forms primarily impaired the entrepreneurial instinct. In both startups and large firms, the leaders needed to grapple with what Kahneman calls WYSIATI, the assumption that "What You See Is All There Is". WYSIATI and the closely related "sin" of hubris clearly kept many American executives from anticipating the intense global competition of the 1970s and 1980s (Note 14). Where slow thinking (System 2) aspects like obtaining capital, engaging in marketing and sales, and handling accounting were involved, however, the histories of entrepreneurship and bureaucratic authority are a mixed bag of negative and positive outcomes (Note 15). Perhaps the best way to start sorting those out is to look at some specific examples of entrepreneurship in postwar America.
Results: System 1 and System 2 Thinking in Two Entrepreneurial Ventures
For purposes of illustration (not proof) we can consider a specific example of successful innovation in the midst of the intense, destabilizing postwar competition. This case involves the classic startup firm with a clearly identified entrepreneur. The business in this instance is SNL, which should prompt you to reflect for a moment on the 1980s, when the United States suffered through a Savings and Loan (hence S&L) crisis. The problem that entrepreneur Reid Nagle set out to solve, however, did not directly involve regulation, deregulation, or inflation, some of the problems used to explain the crisis.
Nagle's interest was in supplying accurate and timely information to those large firms that had economic ties to the S&Ls-a System 1, fast-thinking decision. He knew from experience that each of these organizations had to dig out and evaluate the information they needed from each S&L with which they were doing business. In effect, he was proposing to make business bureaucracies his market and make them more efficient by consolidating and selling information from the regulatory bureaucracy.
Nagle built a small organization around the task of searching for, consolidating, and selling information that was available to anyone. Using his own capital and advice from his network, he paid to have a unique computer platform developed for the data. In the course of these System 2 activities, he had the advantage of an extensive personal network and his prior experience in this corner of finance.
Despite Nagle's strengths, SNL had three major problems in its early years, all of which required substantial System 2 capabilities. Before the development of personal computers and the Internet, the work of gathering and processing information was extremely labor-intensive. Nagle, his wife, and staff had to dig out the information they needed in Washington, Xerox it, and then, back in their Hoboken headquarters, transfer it to their platform, process it, and print it out (Note 16). For a considerable length of time, Nagle was putting in the kind of 100-hour weeks we usually associate with recent immigrants working two jobs and DC cab drivers. The second major problem was working capital. Like many a startup before and after SNL, Nagle ran out of money. He was forced to sell some of his time in consulting, and he "maxed" all of his credit cards before he managed to acquire the investment capital he needed without losing control of the firm. The third problem was competition. He was not guilty of WYSIATI. He knew he did not have patent protection for what SNL was doing, and he knew a good bit about the other organizations selling the same or similar information. His competitors were, however, connected to other financial institutions that were likely to be competitive with the businesses Nagle had targeted as his likely customers. He saw where his competitors were vulnerable. That was his initial selling point, and it proved to be effective. His bureaucratic competitors suffered a fit of WYSIATI and reacted too slowly. With his platform in place, he was then able gradually to expand, with very little additional cost, the information he could provide. As he did so, SNL competed by offering add-ons free and pushing additional competitors out of the market. Nagle's variant on this "Pac-Man strategy" was successful and would later be employed by a number of the high-tech giants of the digital era (Note 17). Happy to be an entrepreneur but less happy as a manager of a successful firm, he finally sold SNL to Standard & Poor's for $2.25 billion in 2015.
This successful outcome leaves us with two big questions to ponder: First, if this type of innovation was going on across a broad front in the digital era, and there is substantial evidence that it was, why do the improvements not show up in our figures for Total Factor Productivity? Second, if bureaucratization was generating opportunities like this for innovation, does it seem possible that America's future will be with a workable form of democratic capitalism and a compromise between America's bureaucratic and entrepreneurial organizations and cultures? That, for the short and the middle term, appears more likely than a Kafka-like dystopia.

The second case study takes us inside a large, well-established bureaucratic firm: Merck, whose chief executive, Henry Gadsden, had been advised to rebuild the company's research around targeted biochemical science. Gadsden was not a scientist; he had come to leadership through sales and marketing. When he followed this advice and appointed Dr. Roy Vagelos as head of basic research in 1974, he was embracing substantial uncertainty and making himself the financier (à la Schumpeter) of an entrepreneurial venture in a bureaucratic setting (à la Chandler and Weber). The Merck bureaucracy (contra Weber and à la Robert Merton's critique) quietly but forcefully resisted this transition to targeted, biochemical research.
The primary entrepreneur, Vagelos, had also embraced uncertainty (contra Schumpeter) because he had no prior knowledge of business. He had, however, substantial experience in science team building; he had extensive successful experiences in public (NIH) and non-profit (Washington University) bureaucracies; and he quickly built a team (System 2, à la Kahneman) to explore in cardiovascular treatments the kind of targeted research that was "on the tip" of biochemistry and enzymology (his primary professional networks) in the 1970s. The result of this effort was a breakthrough statin, a multi-billion-dollar drug, and his effort to transform the company's research and development did not end with this initial innovation (Note 20).
An additional challenge came in the vaccine division, one of the firm's most successful operations.
Indeed, the problem emerged following an outstanding innovation, a new vaccine to prevent Hepatitis B infections. After more than ten years of research, Maurice Hilleman, the head of Virus and Cell Biology, developed an effective vaccine by using particles of the antigen taken from the blood of carriers infected with the virus. Many of those carriers, however, were also infected with HIV, and that prompted Vagelos to look for a new way to produce the vaccine. Current developments in rDNA technology offered a solution, but Merck lacked the scientific and technological capabilities needed to move down that path (Note 21). After appointing a new head of the research effort and creating a three-headed alliance with a leading scientist and a biotech, Merck was able in 1986 to bring out Recombivax HB, the world's first rDNA vaccine (Note 22).
Discussion
As the SNL and Merck experiences, as well as other business histories from this era, suggest, the links between bureaucracy and innovation were more complex than any of our distinguished intellectual hedgehogs have indicated. While a bureaucratic culture might well impede innovation à la Schumpeter (and Merton), bureaucracies have continued to provide new opportunities for the individuals and firms selling them services (à la SNL). This allowed new firms like SNL to pursue their System 1 visions and test and improve their System 2 capabilities. Most of these efforts failed, as they always have in capitalist societies; WYSIATI has continued to foster discouraging mistakes. But the successes have and could in the future continue to foster disruptive innovations that keep the economy in disequilibrium (à la Schumpeter), on a positive growth path (Note 23).
Meanwhile, new sciences and new technologies have encouraged even well-established, bureaucratic organizations to change, to embrace higher levels of uncertainty and risk, and to encourage and sustain internal entrepreneurs (à la Merck and Chandler). If we scan across the entire U.S. postwar economy in the 1960s, 1970s, and 1980s, it will be apparent that this type of responsive, innovation-oriented private enterprise was the exception, not the rule, in many of America's leading industries. WYSIATI was the rule in automobiles, tires, and machine tools. Hence the feeble U.S. business response when faced with intense overseas competition. This was also the case with the postwar public bureaucracies, which provided more opportunities for others to innovate (à la SNL) than public innovations like the Internet (Note 24). Kahneman's behavioral economics helps us break open and analyze these activities, cultures, and organizations as we push forward with the history of American capitalism, its entrepreneurs, and entrepreneurial enterprises. | 2020-04-02T09:12:35.573Z | 2020-03-27T00:00:00.000 | {
"year": 2020,
"sha1": "9e2bc6aaa891c04c77c5a30358f4313c1e5e13f4",
"oa_license": "CCBY",
"oa_url": "http://www.scholink.org/ojs/index.php/jrph/article/download/2723/2762",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "b94ecbce51f6dead4470bcfe9ef2ea04f68f8b39",
"s2fieldsofstudy": [
"Sociology",
"History",
"Economics",
"Business"
],
"extfieldsofstudy": [
"Political Science"
]
} |
247476079 | pes2o/s2orc | v3-fos-license | Fisher Forecasts for Primordial non-Gaussianity from Persistent Homology
We study the information content of summary statistics built from the multi-scale topology of large-scale structures on primordial non-Gaussianity of the local and equilateral type. We use halo catalogs generated from numerical N-body simulations of the Universe on large scales as a proxy for observed galaxies. Besides calculating the Fisher matrix for halos in real space, we also check more realistic scenarios in redshift space. Without needing to take a distant observer approximation, we place the observer at a corner of the box. We also add redshift errors mimicking spectroscopic and photometric samples. We perform several tests to assess the reliability of our Fisher matrix, including the Gaussianity of our summary statistics and convergence. We find that the marginalized 1-$\sigma$ uncertainties in redshift space are $\Delta f_{\rm NL}^{\rm loc} \sim 16$ and $\Delta f_{\rm NL}^{\rm equi} \sim 41$ on a survey volume of $1~({\rm Gpc}/h)^3$. These constraints are weakly affected by redshift errors. We close by speculating as to how this approach can be made robust against small-scale uncertainties by exploiting (non)locality.
Introduction
Primordial perturbations are, to a very good approximation, Gaussian distributed, as recently constrained by Cosmic Microwave Background (CMB) experiments [1] and galaxy surveys [2][3][4]. Even a small deviation from Gaussianity has important implications for our understanding of the physics during inflation, revealing interactions among fields at very high energy (see [5,6] for reviews). Several experimental efforts will constrain these interactions with unprecedented accuracy in the near future (see [7] for a recent overview). While the constraining power of the CMB on primordial non-Gaussianity is relatively simple to predict, the same cannot be said about large scale structure (LSS) observations. The challenge is due to the fact that the cosmic web is contaminated by non-Gaussianities that are not primordial, but instead produced by gravitational instability. Yet, the signal-to-noise of galaxy surveys and intensity mapping experiments typically grows as $k_{\rm max}^3 \times V_{\rm survey}$, as compared to $\ell_{\rm max}^2$ for the CMB. Hence, LSS surveys have a strong potential to dominate constraints on primordial non-Gaussianity and inflationary models in general in the near future. More broadly, a central problem in modern cosmology is the extraction of maximum information content regarding the universe's initial conditions and dynamics from observations. Beyond the traditional program of low-order correlation functions in Fourier space, there is an ongoing effort to characterize the cosmological information content of real-space and higher-order statistics such as the k-nearest neighbor distribution [9][10][11] and the one-point PDF [12][13][14][15][16]. Several of the present authors recently proposed a new class of statistics derived from computational topology for this purpose [17]. These statistics describe the multi-scale arrangement of data into clusters, loops, and voids, formalized by the theory of persistent homology (see [18,19] for general references). The theory of persistent homology has been fruitfully applied to sensor network analysis [20], image analysis [21], virology [22], protein structure [23], string theory [24], statistical physics [25], and more. Over the last several years, persistent homology has begun to see use in quantitatively constraining cosmology [17,26,27]. In the restricted context of dark matter-only simulations in a box, [17] found sensitivity to local primordial non-Gaussianity competitive with the scale-dependent bias, while relying on complementary information (i.e. the higher-order position-space arrangement of dark matter halos on mildly nonlinear scales rather than the low-order information in momentum space at large wavelength). In some sense, local primordial non-Gaussianity is special: even without resorting to high-order correlation function information, future surveys are already expected to significantly improve CMB constraints [28]. This is thanks to the fact that local primordial non-Gaussianity produces a modulation of short-scale modes by long-wavelength modes of the gravitational potential and its spatial derivatives, and therefore an effect of enhancement/suppression of the tracer power spectrum at large scales, where late non-Gaussianities are negligible [29][30][31] (see [32] for a recent review).
For other types of primordial non-Gaussianity there is no such advantage. Detection of these non-Gaussianities would have interesting theoretical implications. For instance, a detection of equilateral non-Gaussianity at the level of $f_{\rm NL}^{\rm equi} \gtrsim 1$ would exclude slow-roll inflation as a viable inflationary scenario [33,34]. Furthermore, if gravitational waves generated during inflation are ever observed, constraints of the order of $\Delta f_{\rm NL}^{\rm equi} \sim 1$ can be used to tell whether they were sourced by the vacuum or other fields (see e.g. [35,36]).
However, the outlook for $f_{\rm NL}^{\rm equi}$ has been mostly grim. Recent measurements of the three-point function of galaxies in BOSS using state-of-the-art modeling have shown that limits are of the order of a few hundred for $f_{\rm NL}^{\rm equi}$ [3,4]. Pessimism for $f_{\rm NL}^{\rm equi}$ is largely motivated by analyses based on low-order correlation functions in momentum space. When one restricts to these summary statistics, the primordial signal is degenerate with unknown aspects of nonlinear evolution, e.g. galaxy biasing. However, as recently explained in [37], one can do significantly better by considering survey data in position space, i.e. at the map level. The physical argument in [37] goes as follows. Uncertainties in gravitational evolution are constrained by locality, and therefore these can affect map-level information only on scales below some $R_*$. On the other hand, primordial non-Gaussianity gives rise to non-local effects at late times, as local interactions during inflation have been stretched to super-horizon scales. Therefore there exist observables that trace primordial non-Gaussianity but remain insensitive to physics below $R_*$. These observables are manifestly protected against small-scale uncertainties.
It is therefore of interest to characterize the information content of real-space statistics regarding primordial non-Gaussianity. In this paper, we perform this characterization for topological observables. Since topological features have extent, they can trace nonlocal correlations such as those generated by primordial non-Gaussianity. On the other hand, as we elaborate in Sec. 2, the features we study are grown locally by coarse-graining the halo distribution. In this sense, there is a cutoff we may apply in order to ignore irrelevant, i.e. small-scale, features. In this work, we find that simple, low-dimensional binned distributions of the linking scale at which a feature forms and the linking scale at which a feature is erased provide significant information regarding primordial non-Gaussianity. We estimate this information via a Fisher matrix formalism for N-body simulations, forecasting constraints of $\Delta f_{\rm NL}^{\rm loc} \sim 15$, $\Delta f_{\rm NL}^{\rm equi} \sim 40$ for a survey volume of $1~({\rm Gpc}/h)^3$. We begin by considering halos in position space. We then move to redshift space, considering observers both in the plane parallel approximation and in the wide angle case, finding that the constraints are not significantly degraded. We find similar robustness when redshift errors are included. We interpret this robustness as a consequence of well-known stability properties of persistent homology [38]. Therefore, we anticipate that, in the presence of a parameterized model of small-scale physics such as galaxy formation, persistent homology may provide a "manifestly protected" observable along the lines of [37].
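Since the Fisher formalism is used throughout what follows, it may help to fix conventions with a schematic estimator. For a Gaussian likelihood with parameter-independent covariance, F_ij = ∂_i μᵀ C⁻¹ ∂_j μ, with the derivatives of the mean summary statistic taken by finite differences across simulations. The sketch below is generic and is not the paper's exact pipeline; in particular, the Hartlap correction for the inverse of a noisy covariance estimate is our assumption here.

```python
import numpy as np

def fisher_matrix(dmu, cov, n_mocks=None):
    """F_ij = dmu_i . C^{-1} . dmu_j for a Gaussian likelihood.

    dmu     : (n_params, n_bins) finite-difference derivatives of the
              mean summary vector, e.g. (mu(th + d) - mu(th - d)) / (2 d).
    cov     : (n_bins, n_bins) covariance estimated from mock catalogs.
    n_mocks : if given, rescale C^{-1} by the Hartlap factor
              (n_mocks - n_bins - 2) / (n_mocks - 1); an assumption,
              not necessarily the paper's treatment.
    """
    cinv = np.linalg.inv(cov)
    if n_mocks is not None:
        n_bins = cov.shape[0]
        cinv = cinv * (n_mocks - n_bins - 2) / (n_mocks - 1)
    return dmu @ cinv @ dmu.T

# Marginalized 1-sigma uncertainty on parameter i:
# sigma_i = np.sqrt(np.linalg.inv(F)[i, i])
```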
The rest of the paper is organized as follows. In Sec. 2 we review necessary aspects of persistent homology, including the coarse-graining scheme we implement and the summary statistics we employ. In Sec. 3 we describe in more detail local and equilateral non-Gaussianity, and how they can be implemented in N-body simulations. In Sec. 4 we outline our methodology for computing the Fisher information, including our simulation set and implementation of redshift errors. In Sec. 5 we give the results of our analysis. In Sec. 6 we explore the robustness of our topological analysis to small scale effects. In Sec. 7 we conclude.
Formalism
Here we briefly outline the construction and interpretation of our summary statistics. For a more complete presentation on applying persistent homology to a cosmological setup, we refer to [17]. For general references on persistent homology, see [18,19].
Basics of Persistent Homology
The formalism of persistent homology allows us to characterize the multiscale topology of a data set. In our context, the data set under consideration is a catalog of dark matter halos in a cubical box, as a proxy for galaxies as observed by surveys. In other words, we have a "point cloud" of halo positions. It is well-known that at late times, the distribution of halos features a rich hierarchy of interlocked voids, filaments, and clusters, often referred to as the cosmic web [40]. By hierarchy, we refer to the fact that these objects can appear and disappear as a function of scale. Once one has coarse-grained to a scale L, voids with radius R < L will be "filled in." Moreover, halos making up the boundary of a void need to connect at scale L in order for the surface of the void to close. Therefore every void can be associated with both a death scale and a birth scale. We illustrate this coarse-graining process in Fig. 1.
Formalizing this intuitive picture and generalizing to lower-dimensional features is possible within the framework of persistent homology. In the context of this work, the topological features found in three dimensions are clusters, filament loops, and voids. We compute the distribution of these features across birth and death scales and use how this distribution reacts to changes in initial conditions to forecast sensitivity to primordial non-Gaussianity.
By construction, the distribution of topological features traces correlation functions of all orders, beyond simply the two-point and three-point functions often considered in cosmology. In order to identify topological features, we embed our data into higher-order structures called simplicial complexes. The mathematical formalism that describes the topology of these complexes is homology.
Simplicial Complexes and Homology. A simplicial complex is composed of simplices, where a 0-simplex is a vertex, a 1-simplex is an edge between vertices, a 2-simplex is a triangle between three edges, and a 3-simplex is a tetrahedron between four triangles. We refer generally to p-simplices with 0 ≤ p ≤ 3. A simplicial complex is a collection of simplices that is closed under taking faces of a simplex (e.g. for an edge to be present, the two vertices at its ends must be present) and under the intersection of simplices.
Naturally, we associate each halo in a given simulation with a vertex. In order to compute the data's connectivity and higher-order topological aspects, higher-dimensional simplices must also be included. We describe our rules for these simplices in the next section. Given a simplicial complex, its topology is encoded in the linear operators ∂_p that take a collection of p-simplices to its (p − 1)-dimensional boundary. In words, topological features correspond to collections of p-simplices with vanishing boundary that are not themselves the boundary of a collection of (p + 1)-simplices. An example is shown in the central panel of Fig. 1. These features can be identified by manipulating the boundary operators ∂_p. In particular, one is led to define the homology groups H_p ≡ ker ∂_p / im ∂_{p+1}. Informally, each element of H_p is an independent "hole" of dimension p. Note that a 0-dimensional hole here is a connected component. In three dimensions, one also has filament loops (elements of H_1) and voids (elements of H_2). Formally, elements of H_p are equivalence classes of collections of p-simplices with vanishing boundary. The ranks of the homology groups, which count the number of independent p-dimensional topological features, are called the Betti numbers b_p ≡ |H_p|.

Fig. 1 caption. In red is highlighted a collection of halos that will eventually correspond to a nontrivial homology generator, i.e. a hole. Middle panel: at an intermediate coarse-graining scale, most halos have been connected via edges to their nearest neighbors; the red region now corresponds to a nontrivial homology generator. Right panel: at a larger coarse-graining scale, the "hole" in the center diagram has been filled in by triangles.
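To make these definitions concrete, note that over a field the Betti numbers follow from ranks of the boundary matrices, b_p = dim ker ∂_p − rank ∂_{p+1} = n_p − rank ∂_p − rank ∂_{p+1}. The toy example below (our illustration, computed over the reals) is a hollow triangle, three vertices and three edges, which has b_0 = 1 connected component and b_1 = 1 loop.

```python
import numpy as np

def betti(n_simplices, boundary):
    """Betti numbers b_p = n_p - rank(d_p) - rank(d_{p+1}).

    n_simplices : number of p-simplices for each p = 0 .. p_max.
    boundary    : dict p -> matrix of d_p for p >= 1 (d_0 is the zero map).
    Ranks are computed over the reals, which suffices here.
    """
    pmax = len(n_simplices) - 1
    rank = {0: 0, pmax + 1: 0}
    for p in range(1, pmax + 1):
        rank[p] = np.linalg.matrix_rank(boundary[p])
    return [n_simplices[p] - rank[p] - rank[p + 1] for p in range(pmax + 1)]

# Hollow triangle: vertices {0, 1, 2}, edges {01, 02, 12}, no 2-simplex.
d1 = np.array([[-1, -1,  0],   # rows: vertices, columns: edges
               [ 1,  0, -1],
               [ 0,  1,  1]])
print(betti([3, 3], {1: d1}))  # -> [1, 1]
```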
Filtrations and Persistent Homology. As anticipated by our previous discussion, it is natural that the distance between two vertices should be the primary determining factor as to whether the two vertices should be connected by an edge. For example, one way to construct a simplicial complex from a point cloud is to pick a threshold length L and for every pair of vertices separated by distance d, connect them with an edge if d ≤ L. Similar rules may be employed to determine the inclusion of higher-dimensional simplices. It should be immediately clear, however, that the topology of the resulting simplicial complex will be highly unstable with respect to the choice L. Moreover, picking a single length scale L at which to view the data set cannot accurately convey a hierarchical distribution of features. These problems are resolved by representing our data set with a family of simplicial complexes rather than a single complex. In particular, we use a filtration, or a growing family of simplicial complexes. We parameterize a filtration with ν, which to first approximation is a length scale ν ∼ L. Each simplex in a filtration is assigned its own value ν_0 at which it is included in the complex. For example, for an edge between two vertices, we might have ν_0 = d. Then ν acts as a filter, so that only simplices with ν_0 ≤ ν are included at a given point in the filtration. Intuitively, ν gives the coarse-graining scale at which the data set is considered. As previously mentioned, topological features will be created and destroyed as ν is increased, with formerly disconnected components joining each other and holes forming and filling in. Given a filtration, the creation and destruction of individual topological features may be computed via matrix reduction methods. This is then persistent homology: we now have access to the fine-grained data of individual homology classes, including their creation, destruction, and length scales in between where the features persist. Note already that this is more refined information than the Betti numbers, which merely count the number of topological features of various dimensions. One way to summarize the topological information in a filtration is via a persistence diagram, which is a scatter plot of the filtration parameters ν at which topological features are created and destroyed. For ease of visualization, we often plot a persistence diagram with the axes (ν_birth, ν_persist) ≡ (ν_birth, ν_death − ν_birth), since by definition ν_death > ν_birth. Examples of persistence diagrams generated for our data are shown in Fig. 2. In this work, we use what we call an αDTM-filtration [17,41]. This refines the filtration described above in two respects. First, rather than allowing an edge between any two vertices, the simplicial structure of a given data set is determined by a Delaunay triangulation. In other words, at a given filtration parameter ν, the complex forms a subcomplex of the Delaunay complex [42]. There is a computational overhead associated with computing the Delaunay complex, but the total number of simplices is dramatically reduced so that the tradeoff is worth it. Our second variation is in the assignment of filtration parameters to simplices. Namely, rather than a purely length-based filtration parameter, we find it useful to include information related to the local density of points in our data. In particular, this affords our filtration more robustness against outliers.
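In practice, filtrations and their persistence diagrams are computed with standard libraries. As an illustrative stand-in for the pipeline described here (an alpha filtration on a mock point cloud, not the αDTM-filtration of this work), one can use GUDHI; note that GUDHI assigns squared circumradii as alpha-complex filtration values, so square roots recover length scales.

```python
import numpy as np
import gudhi  # pip install gudhi

# Mock "halo catalog": uniform random points in a unit box
# (illustration only; the paper uses N-body halo positions).
rng = np.random.default_rng(0)
points = rng.uniform(0.0, 1.0, size=(500, 3))

alpha = gudhi.AlphaComplex(points=points)
simplex_tree = alpha.create_simplex_tree()
diagram = simplex_tree.persistence()  # (dimension, (birth, death)) pairs

# Birth/death scales of filament loops (H_1); take square roots to
# convert GUDHI's squared-radius convention to length scales.
h1 = np.sqrt(simplex_tree.persistence_intervals_in_dimension(1))
print(h1[:5])  # first few (nu_birth, nu_death) rows
```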
To accomplish this, we use the "distance-to-measure" or DTM function,

DTM(x) = ( (1/k) \sum_{y \in N_k(x)} \|x - y\|^p )^{1/p},   (2.1)

where N_k(x) is the set of the k nearest neighbors to x in the point cloud.
The interpretation of DTM(x) is that it quantifies the extent to which point x is an outlier within the point cloud: DTM(x) is large when its k-nearest neighbors are far away, and it is small when its k-nearest neighbors are close by. A given vertex at position x is assigned ν = DTM(x) as its filtration value. For edges, we mix the DTM value with edge length, the interpretation being that a point x is surrounded by a ball of radius

r_x(ν) = ( ν^q − DTM(x)^q )^{1/q} for ν ≥ DTM(x), and the empty set otherwise,   (2.2)

with q > 0, and an edge is included when the relevant balls overlap. Crucially, the edges are still taken from those present in the Delaunay complex. Higher-order simplices are then added to the filtration when all necessary faces are present. The effect of DTM(x) on r_x(ν) is that the growth of balls around outliers is impeded. For example, if there are sparse points in the interior of a void, using r_x(ν) from eqn. (2.2) rather than, say, r_x(ν) = ν ensures that the void is not prematurely filled by simplices associated to outlier points. Therefore we track larger-scale features than in an α-filtration; see Fig. 3 for an illustration.
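For illustration, here is a minimal sketch of the DTM computation and the resulting ball radii, assuming the p = 2 form of eq. (2.1) and our reading of eq. (2.2); the function names are ours:

```python
import numpy as np
from scipy.spatial import cKDTree

def dtm(X, k=15):
    """Distance-to-measure with p = 2: root-mean-square distance to the
    k nearest neighbors (column 0 of the query is the point itself)."""
    dists, _ = cKDTree(X).query(X, k=k + 1)
    return np.sqrt(np.mean(dists[:, 1:] ** 2, axis=1))

def ball_radius(nu, dtm_x, q=2.0):
    """r_x(nu) = (nu^q - DTM(x)^q)^(1/q) once nu >= DTM(x), else zero:
    balls around outliers (large DTM) appear later and grow more slowly."""
    return np.where(nu >= dtm_x,
                    np.maximum(nu ** q - dtm_x ** q, 0.0) ** (1.0 / q),
                    0.0)

X = np.random.default_rng(1).random((500, 3))
f = dtm(X)                      # large values flag likely outliers
print(ball_radius(0.3, f)[:5])  # per-point radii at filtration value 0.3
```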
In this work, we take k = 15, p = q = 2, as in [17]. Currently these choices are based on heuristics. For example, we observe that for our data the DTM distribution with k = 15 is approximately normal. We note in passing that p, q can theoretically be tuned via gradient-based methods, which we leave to future work.
Additionally, we note that the DTM function also gives us precise information regarding the distances to nearest neighbors of a point. Recently, the distribution of such distances has been considered as a summary statistic in its own right [9][10][11]. Results from those works suggest that it could be beneficial to simultaneously consider the topology for different values of k.
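To make the pipeline of this section concrete end to end, the sketch below computes a persistence diagram for a noisy circle using an ordinary Vietoris-Rips filtration with the public gudhi library; this is an illustrative stand-in, not the αDTM code of [17]:

```python
import numpy as np
import gudhi  # generic TDA library, assumed installed

rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 200)
# Noisy circle: we expect one long-lived 1-cycle to dominate H_1.
points = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.standard_normal((200, 2))

# Vietoris-Rips filtration: an edge enters at nu_0 = d(x, y).
rips = gudhi.RipsComplex(points=points, max_edge_length=2.0)
st = rips.create_simplex_tree(max_dimension=2)

# persistence() returns (dimension, (nu_birth, nu_death)) pairs.
diagram = st.persistence()
h1 = [(b, d) for dim, (b, d) in diagram
      if dim == 1 and d != float("inf")]
print(max(h1, key=lambda bd: bd[1] - bd[0]))  # the dominant loop
```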
From Persistence Diagrams to Summary Statistics
As mentioned, the topological information content of a filtration is summarized via a persistence diagram. As a collection of points, a persistence diagram is not directly amenable to statistical analysis. We would prefer a topological summary that lives in a vector space. There are many proposals for how to perform this map (or more generally how to compute an inner product on the space of persistence diagrams) [43][44][45][46]; in general, any permutation-invariant function is allowed, and the map can be parameterized by a neural network [47,48]. Persistence images [45], which are essentially smoothed histograms of a persistence diagram, are particularly useful for visualizing the density of topological features. In the smoothing kernel, one usually includes a persistence-dependent weight such as ν_persist or log(ν_persist) to emphasize long-lived features, or alternatively because most features are short-lived. Some examples of persistence images are shown in Fig. 2.

In our present context, we lack the data volume to train a neural network, and favor a compressed representation so that covariance matrices may be estimated. We therefore use as summary statistics histograms representing the distribution of births and the distribution of deaths for cycles of each dimension. We employ cutoffs at the 99.9th percentile for a fiducial simulation, to remove sparse outliers. Each distribution is then summarized by 5 bins, for a total of 30 bins. This is to be compared with 120 simulations of volume 1 (Gpc/h)^3. We expect that more information can be extracted from an optimized vectorization of persistence diagrams; optimizing these representations will require a larger amount of data than we consider in the present work.
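A minimal sketch of this vectorization, with hypothetical array layouts (diagrams[p] holding (birth, death) rows for p-cycles) and bin edges frozen on a fiducial run:

```python
import numpy as np

def fiducial_edges(fid_diagrams, n_bins=5, cut=99.9):
    """Fix bin edges on a fiducial simulation, cutting each distribution
    at its 99.9th percentile to remove sparse outliers."""
    edges = {}
    for p in (0, 1, 2):
        for col, which in enumerate(("birth", "death")):
            vals = fid_diagrams[p][:, col]
            edges[(p, which)] = np.linspace(
                vals.min(), np.percentile(vals, cut), n_bins + 1)
    return edges

def summarize(diagrams, edges):
    """5-bin histograms of births and deaths for 0-, 1-, and 2-cycles,
    concatenated into a single 30-bin data vector."""
    vec = []
    for p in (0, 1, 2):
        vec.append(np.histogram(diagrams[p][:, 0], bins=edges[(p, "birth")])[0])
        vec.append(np.histogram(diagrams[p][:, 1], bins=edges[(p, "death")])[0])
    return np.concatenate(vec)  # shape (30,)
```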
Primordial non-Gaussianity is a deviation from Gaussian statistics of the primordial curvature perturbation. Its information is encoded in the bispectrum in Fourier space. Although the exact form of the bispectrum is model-dependent, non-Gaussianities can be classified by the triangle configuration where their contribution is strongest. The most popular primordial non-Gaussianities are related to so-called local and equilateral shapes, which peak on squeezed and equilateral triangle configurations, respectively. Separable templates for these shapes have been introduced [49] for a fast implementation in observations. Here we define these templates and give a brief review of how they are implemented in the numerical simulations that we use for our study.
Local type. Primordial non-Gaussianity of the local type can be written as a Taylor expansion around a Gaussian field of the primordial fluctuations [50], at a given position x:

ζ(x) = ζ_G(x) + (3/5) f_NL^loc [ ζ_G(x)^2 − ⟨ζ_G^2⟩ ],   (3.1)

with ζ the comoving curvature perturbation, ζ_G a Gaussian random field, and f_NL^loc parametrizing the amplitude of non-Gaussianity.^4 The expansion Eq. (3.1) generates the bispectrum

B_ζ^loc(k_1, k_2, k_3) = (6/5) f_NL^loc [ P_ζ(k_1)P_ζ(k_2) + 2 perms. ],   (3.2)

which peaks on squeezed triangle configurations, i.e. when correlating a long-wavelength mode with short-wavelength modes. This parameterizes a physical correlation between small-scale physics and the large-scale gravitational potential, induced during inflation. Such a correlation is known to generate a scale-dependent enhancement or suppression (depending on the sign of f_NL^loc) at very large scales in the galaxy two-point correlation function (see [32] for a review), which allows constraining this type of non-Gaussianity quite well using galaxy surveys.
Equilateral type. Another type of primordial non-Gaussianity is so-called equilateral, which is characterized by a bispectrum that peaks in the equilateral configuration, i.e. when all sides of the triangle are equal. A popular model producing this bispectrum is the Dirac-Born-Infeld (DBI) model [55][56][57]. We parameterize this type of non-Gaussianity with the usual template [58]

B_ζ^equi(k_1, k_2, k_3) = (18/5) f_NL^equi [ −( P_ζ(k_1)P_ζ(k_2) + 2 perms. ) − 2( P_ζ(k_1)P_ζ(k_2)P_ζ(k_3) )^{2/3} + ( P_ζ(k_1)^{1/3} P_ζ(k_2)^{2/3} P_ζ(k_3) + 5 perms. ) ].   (3.3)

Here the normalization of the amplitude f_NL^equi is fixed such that f_NL^equi ≡ f_NL^loc in the equilateral limit.
Implementation in N-body simulations
N-body simulations of the universe on large scales solve the non-linear equations describing gravitational evolution from an early time, where perturbations are linear, until late times. Initial displacements of the particles can be set using Lagrangian perturbation theory (LPT) at an early redshift z ∼ 100. If the initial conditions of the simulation are Gaussian, particles are randomly distributed on a mesh grid and then displaced by the LPT displacement field ψ [59]. Implementing non-Gaussian initial conditions therefore requires an additional displacement by ∝ f NL ∇ζ G on the particles as they are assigned to the grid, before being displaced by ψ.
In the case of local type primordial non-Gaussianity, such displacement is trivial to implement, as Eq. (3.1) is defined for each position x, so it can be applied particle-by-particle by just converting ζ to the gravitational potential Φ. Note that primordial correlations of 4 or more points are generated by this procedure. For example, the quadratic term in Eq. (3.1) generates a primordial 4-point function of the form ⟨ζζζζ⟩ ∼ (f_NL^loc)^2 ⟨ζ_G ζ_G ζ_G ζ_G⟩. This is somewhat artificial in the sense that we neglected all higher powers of ζ_G in equation (3.1) (such as g_NL ζ_G^3). These ignored terms could generate higher-order primordial correlation functions of the same order or larger than those generated by the terms we kept. On the other hand, for small values of f_NL^loc, higher-order primordial correlation functions of ζ are suppressed by powers of f_NL^loc ζ, though they could in principle have a small effect at small scales. We cannot rule out the possibility that the effects that we observe are in part due to these higher-order correlation functions. However, they are all sourced by the direct coupling between long and short modes introduced in Eq. (3.1), so we are constraining the effect of such a coupling.
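As an illustration of this point-by-point application, a sketch of Eq. (3.1) on a grid; the 3/5 factor assumes the Φ-based f_NL convention carried over to ζ, and the white-noise field stands in for a properly correlated ζ_G:

```python
import numpy as np

def local_ng(zeta_g, f_nl_loc):
    """zeta = zeta_G + (3/5) f_NL^loc (zeta_G^2 - <zeta_G^2>), applied
    independently at every grid point (this is what makes the local
    case trivial to implement in the initial conditions)."""
    return zeta_g + 0.6 * f_nl_loc * (zeta_g ** 2 - np.mean(zeta_g ** 2))

# White noise as a stand-in for a correlated Gaussian realization.
zeta_g = 1e-5 * np.random.default_rng(2).standard_normal((64, 64, 64))
zeta = local_ng(zeta_g, f_nl_loc=10.0)
```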
The equilateral case is slightly more complicated, but has been extensively studied in the past (see [60] and [61] for reviews). In this case, the generation of initial conditions involves finding a random field that satisfies the bispectrum shape of Eq. (3.3), which is not a unique operation. This can be done by introducing an appropriate quadratic displacement. However, this procedure also sources small higher-order primordial correlation functions. A generic algorithm was developed in [60] (see also [62]), in which the bispectrum template is decomposed into factorizable functions. This allows one to generate a one-parameter solution for the inversion. The parameter can be used to constrain the appropriate behaviour of the equilateral shape in the squeezed limit.
Primordial correlations at all higher orders are generated by this algorithm. These are somewhat more artificial than in the case of local non-Gaussianity, since they are simply a byproduct of the procedure used to compute the initial displacements. On the other hand, they are also suppressed by powers of f_NL ζ. Again, the effects we observe could be due in small part to these primordial higher-order correlations. However, the coupling between scales in this case is very different from the "local" one: initial conditions are adiabatic, and the coupling between long (k_L) and short (k_s) wavelength modes is suppressed by k_L^2/k_s^2 [51-53, 63-65]. Thus, we are studying the effect of non-Gaussianity in the absence of such a coupling.
Methodology
In this section, we review all steps of the method: the definition of our summary statistics, the dataset that we use, and the Fisher formalism we employ, along with several consistency tests.
Dataset and persistence calculation
To perform our analysis we use the Eos Dataset,^5 which includes simulations with primordial non-Gaussianity of equilateral and local type. The initial particle displacement is generated using the public parallel code L-PICOLA [67] for realizations with equilateral non-Gaussianity and with 2LPTic [68,69] for realizations with Gaussian and local non-Gaussian initial conditions. The cosmology is flat ΛCDM with σ_8 = 0.85, h = 0.7 and Ω_m = 0.3. Initial conditions are generated at z_in = 99. The public code Gadget2 [70] is used to evolve 1536^3 particles in a cubic box of 2 Gpc/h per side. Each box has 15 different realizations. Simulations with Gaussian initial conditions are named G85L hereafter. Simulations with non-Gaussian initial conditions of local and equilateral type are initialized with f_NL^loc = 10 and f_NL^equi = −30 respectively, and we refer to them as NG10L and ENGm30L. For visualization purposes (see Figure 4) we also use simulations with a stronger signal, with f_NL^loc = 250 and f_NL^equi = −1000, labeled NGp250L and ENGm1000L, respectively. For our analysis, we identify halos in each simulation using the code Rockstar [71], identifying candidate halos with a minimum of 50 particles using a Friends-of-Friends (FoF) algorithm with a linking length λ = 0.28 at redshift z = 1. This results in halos with minimum mass M_min = 9.2 × 10^12 M_⊙/h, which would host a galaxy population roughly compatible with the BOSS CMASS sample (see [72] for an application of an HOD model to a similar halo population). As explained just below, we divide each simulation box into 8 sub-boxes of 1 (Gpc/h)^3 volume, which makes a total of 120 sub-boxes per cosmology.
Implementation of redshift errors. Part of our analysis is devoted to halos in redshift space. We adopt two different prescriptions to displace halos from real to redshift space: in the plane-parallel approximation we displace halos along one of the axes parallel to a side of the box (x̂, ŷ, or ẑ); in the other case, we displace them along the vector r̂ between the observer (at the origin of the box) and the halo.
We also mimic redshift errors by trading redshift uncertainties for velocity uncertainties at fixed redshift in the box. We distinguish two hypothetical cases, corresponding to a spectroscopic sample and a photometric sample. Since the typical velocity dispersion of the Eos halos is around σ_v ∼ 300 km/s, we assume a spectroscopic survey whose error on redshift estimation corresponds to an uncertainty on the velocity about 5 times smaller than σ_v. Hence, we displace halos using a random Gaussian distribution centered on the redshift space position and with standard deviation σ_spec ∼ σ_v/5 = 60 km/s. As for the photometric sample, we take a hypothetical error on redshift determination of ∆z/(1 + z) = 0.01, which translates to σ_photo = 3000 km/s. In this case, we do not displace halos by their peculiar velocities, since they are negligible with respect to σ_photo. In summary, we have

s = x + [ (v · n̂ + v_X) / H ] n̂,   (4.1)
v_X ∼ N(0, σ_X^2),   (4.2)

where s is the position in redshift space, H is the Hubble parameter, and n̂ is the line of sight, corresponding to the z-axis for the plane-parallel approximation and to the central vector for the wide-angle observer. The error is represented by v_X with X = {spec, photo}, a random variable drawn from a normal distribution with standard deviation σ_X and zero mean.

Persistence calculations. We construct the αDTM-filtration using the public code^6 developed in [17]. The algorithm is run on sub-boxes of 1 Gpc/h per side of each simulation box, for a total of 8 sub-boxes per simulation. As discussed in [17], most of the features are born and die at scales much smaller than the size of each sub-box, hence effects from the boundary of a given sub-box may be neglected.^7 Interestingly, the signature of primordial non-Gaussianity of the local type does not peak on large scales as in the case of the galaxy/halo two-point correlation function, but around a birth scale of order O(10) Mpc/h. We show the difference in persistence images between f_NL^loc = 250 and f_NL^loc = 0 simulations in Figure 4. The strongest differences are seen between birth scales of around ∼ 10 and ∼ 30 Mpc/h. As discussed in [32], the observed shift to smaller birth scales might be connected to the change in inter-halo distance caused by primordial non-Gaussianity, and for 2-cycles it has some similarities to the change expected in the void size distribution found in [73]. We observe a smaller change for equilateral non-Gaussianity (bottom panels) and a reversed behaviour: features are born at larger scales in the presence of primordial non-Gaussianity. This might simply depend on the different sign chosen for f_NL^equi. The difference in amplitude between equilateral and local shapes might be due to the effect of scale-dependent bias, but we defer a more accurate investigation to future work. In summary, we produce persistence diagrams for 0-, 1- and 2-cycles for the αDTM-filtration for each of the 8 sub-boxes, for each of the 15 realizations, so as to have effectively 120 realizations in total per cosmology.
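A minimal sketch of the displacement as we read eqns. (4.1)-(4.2), covering both line-of-sight conventions and both error models; function and argument names are ours:

```python
import numpy as np

def to_redshift_space(x, v, H, sigma_err=0.0, observer=None, seed=0):
    """s = x + ((v . n + v_X) / H) n, with v_X ~ N(0, sigma_err^2).
    observer=None gives the plane-parallel case (n = z-axis); otherwise
    n points from the observer to each halo (wide-angle case)."""
    rng = np.random.default_rng(seed)
    if observer is None:
        n = np.tile([0.0, 0.0, 1.0], (len(x), 1))
    else:
        n = x - observer
        n /= np.linalg.norm(n, axis=1, keepdims=True)
    v_err = rng.normal(0.0, sigma_err, len(x)) if sigma_err > 0 else 0.0
    v_los = np.einsum("ij,ij->i", v, n) + v_err
    return x + (v_los / H)[:, None] * n

# Photometric case: peculiar velocities neglected, scatter of 3000 km/s.
# s_photo = to_redshift_space(x, np.zeros_like(x), H, sigma_err=3000.0,
#                             observer=np.zeros(3))
```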
Summary Statistics
Previously, [17] extracted information from persistence diagrams via the Betti numbers and several empirical distribution functions.
These statistics contain explicit information about how the feature scales of birth, death, and persistence respond to primordial non-Gaussianity. Given 3 dimensions of homology and 3 types of empirical distribution functions, the number of bins in these statistics was quite large, presenting an obstruction to building a covariance or Fisher matrix. Tests built on hand-crafted templates nevertheless proved successful in identifying small levels of primordial local non-Gaussianity. As introduced in Section 2, we construct a low-dimensional summary statistic via histograms of birth and death scales of cycles of each dimension.^8 For varying cosmology, we expect to have measurable differences between ∼ 10 and ∼ 40 Mpc/h, given what is observed in the persistence image (cf. Figure 4).
Choice of binning.
In the interest of an invertible covariance matrix and a reliable Fisher matrix, given that we have 120 realizations in total per cosmology, we need a summary statistic with fewer than 120 bins. Choosing the optimal number of bins necessitates a compromise between two competing effects: with too few bins, we lose constraining power; with too many bins, the (inverse) covariance matrix is not reliable. (Additionally, with too many bins, fluctuations in some bins may not be Gaussian.) As a trade-off between these effects, we use 30 bins, corresponding to 5 bins per distribution. We show an example of our data vector for the fiducial cosmology in Fig. 5.
[Figure 5: mean data vector for the fiducial cosmology, G85L. Figure 6: data derivatives.]
Fisher Matrix
In order to estimate the constraining power of our summary statistics, we compute the Fisher matrix. The Fisher matrix encodes the optimal constraints that can be derived from a summary statistic [74,75]. For Gaussian-distributed data, and neglecting the dependence of the covariance on the parameters,^9 the Fisher matrix is given by

F_ij = \sum_{a,b} D^a_{,i} (C^{-1})_{ab} D^b_{,j},   (4.3)

where D is the data vector of size N_b = 30 bins, averaged over 15 realizations each with 8 sub-boxes for each cosmology, and X_{,i} = (X(θ_i) − X(θ_i = 0))/θ_i denotes the numerical derivative of X with respect to model parameter θ_i = f_NL^loc, f_NL^equi. The covariance matrix C is built directly from the data vectors,

C_{ab} = ⟨ (D^a − D^a_mean)(D^b − D^b_mean) ⟩,   (4.4)

where a, b run over the elements of the data vector of the Gaussian initial conditions simulations, G85L, D_mean is the mean data vector over 15 realizations and 8 sub-boxes for a total of N = 120, and ⟨·⟩ = (1/N) Σ (·) is the average over the N realizations. Since we use the data covariance matrix, when inverting we include a correction factor [77]. Given the Fisher matrix, the marginalized information on model parameter θ_i corresponds to a 1-σ constraint of σ_i = [(F^{-1})_{ii}]^{1/2}.
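The forecast then reduces to a few lines of linear algebra. The sketch below (names ours) combines the one-sided derivatives, the covariance estimated from the fiducial realizations, and the inverse-covariance correction factor of [77]:

```python
import numpy as np

def fisher_forecast(D0, D_at, thetas, fiducial_vectors):
    """D0: mean fiducial data vector (N_b,); D_at[name]: mean data vector
    at theta != 0; thetas[name]: the displaced parameter value;
    fiducial_vectors: (N, N_b) array of fiducial realizations."""
    N, Nb = fiducial_vectors.shape
    C = np.cov(fiducial_vectors, rowvar=False)
    hartlap = (N - Nb - 2) / (N - 1)       # correction for estimated C^-1
    Cinv = hartlap * np.linalg.inv(C)
    names = list(thetas)
    dD = np.array([(D_at[n] - D0) / thetas[n] for n in names])  # one-sided
    F = dD @ Cinv @ dD.T                   # F_ij = D_,i^T C^-1 D_,j
    sigmas = np.sqrt(np.diag(np.linalg.inv(F)))  # marginalized 1-sigma
    return F, dict(zip(names, sigmas))
```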
Numerical derivatives. For a reliable estimate of the numerical derivative, it is generally preferred to use small deviations of the reference parameter, assuming that for these the response of the quantity to the parameter is linear. We use f loc NL = 10 and f equi NL = −30 to compute numerical derivatives. For these amplitudes, the numerical derivatives of our summary statistics can be estimated reliably, i.e. they are not noise-dominated. We show convergence of our results with respect to the simulation volume used to estimate the derivatives and covariance in App. A.1. As observed in [17], the change in our summary statistics scales nonlinearly with f equi NL , f loc NL . In particular, the response scales as D(f equi NL ) − D(f equi NL = 0) ∼ (f equi NL ) n for 0 < n < 1. See App. A.2 for details. We show the numerical derivatives in Fig. 6.
Gaussianity tests. True constraints will be most closely related to the Fisher estimate if fluctuations of our summary statistics are Gaussian. We visually inspect our bins, drawing a histogram of counts over all the realizations for each bin and verifying that all bins are Gaussian distributed. Since each bin by definition contains a non-negative number of counts, crucial to this approximation is that each bin of our distribution is sufficiently populated, namely that the mean is several times larger than the square root of the variance. Imagining Poisson noise, we would have that the variance scales with the mean. With 60 bins, we find that each bin has a mean of at least 4.5 σ, while we find 10 σ for 30 bins. We decide to use the latter configuration. As a further test of Gaussianity, we perform a Kolmogorov-Smirnov test [78] for this choice of bins. We find that our summary statistics pass the test well within the 95% confidence limit.^10

^9 In our setup, the covariance matrix actually depends on the parameters significantly. One would then be tempted to add the term (1/2) Tr[ C^{-1} C_{,i} C^{-1} C_{,j} ] to F_ij, i.e. the contribution from the derivative of the covariance. However, if the mean and variance of our statistics are not independent, including this term can overcount some of the information content of a statistic [76]. Since our summary statistics are derived from discrete counts of features in bins, their fluctuations are closely related to those of Poisson statistics. For a Poisson distribution, the variance equals the mean, and the Fisher matrix is only given by the term in (4.3).
^10 We thank an anonymous referee for suggesting the test.
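A sketch of these per-bin checks (ours), combining the bin-population criterion with scipy's Kolmogorov-Smirnov test against a normal distribution fitted to each bin:

```python
import numpy as np
from scipy.stats import kstest

def bin_gaussianity(vectors, alpha=0.05):
    """vectors: (N, N_b) realizations of the data vector. For each bin,
    report the mean-to-scatter ratio and whether the KS test against a
    fitted normal is passed at the given significance level."""
    report = []
    for b in range(vectors.shape[1]):
        x = vectors[:, b]
        mu, sd = x.mean(), x.std(ddof=1)
        _, p_value = kstest(x, "norm", args=(mu, sd))
        report.append((b, mu / sd, p_value > alpha))
    return report
```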
Results
We present our main results in Figure 7. First, we show on the left panel the uncertainty on f loc NL and f equi NL obtained from the Fisher matrix as described in the previous section, for a fiducial cosmology with Gaussian initial conditions. We present the results for the analysis in real space (blue contours), in redshift space in the plane-parallel approximation (green contours), and in redshift space without the plane-parallel approximation where the observer is in a corner of the simulation box (orange contours).
The 1-σ marginalized constraints are of the order of ∆f_NL^loc ∼ 15, ∆f_NL^equi ∼ 40, with small differences among the three cases. The degeneracy between f_NL^loc and f_NL^equi is seen to be small. As a consequence, the constraints on one of the two parameters do not degrade very much when marginalizing over the other. The redshift space analysis gives slightly weaker constraints, with the plane-parallel case being more pessimistic. The analysis in redshift space is straightforward to perform, since we need simply displace the halos' apparent positions and run the pipeline on the displaced halos. In particular, the analysis in the wide-angle case is as easy as the plane-parallel case. This is an advantage over analyses based on the comparison of the measured power spectrum to the theoretical prediction based on perturbation theory, where computing wide-angle effects is not trivial [79].
Impact of redshift errors
We estimate the effect of photometric and spectroscopic redshift errors on this analysis, modeled as described in Sec. 4. We present these results in the right panel of Figure 7: error contours for the cases with no redshift errors, with photometric errors, and with spectroscopic errors. In all these cases the observer is in a corner of the simulation box. The results shown were obtained after averaging over six realizations of the errors. Overall, the inclusion of redshift errors leads to a mild degradation of the constraints. As expected, photometric errors have the largest impact, degrading constraints by ∼ 50% for f_NL^equi and ∼ 25% for f_NL^loc. Spectroscopic errors only change the contours slightly, by ∼ 20% for f_NL^equi and 10% for f_NL^loc. All these results are summarized in Table 1. The protection against random perturbations of the points' locations in the volume is a typical property of persistent homology, so we might say that these statistics are protected from redshift errors by construction. See Sec. 6 for more discussion on this point. It is also worth pointing out that redshift errors are instead a strong source of degradation for a joint power spectrum-bispectrum analysis [80], especially for equilateral non-Gaussianity.
[Table 1: marginalized 1-σ constraints for Real Space, RSD Plane Parallel, RSD Wide Angle, +v_spec-error, and +v_photo-error; the ∆f_NL^loc row begins 13.9 (Real Space), 16 (RSD Plane Parallel), ...]

[Figure 8 caption: We compare the constraints obtained from a reduced volume (orange) to the ones already shown for the full one (blue). The reduced volume is 0.125 (Gpc/h)^3, which is 8 times smaller than the volume considered in the previous forecasts.]

Dependence on volume

We also wish to estimate how these constraints scale with the volume of the survey. For this, we repeat the real space analysis reducing the volume of the sub-boxes by a factor of 8. This is done by dividing each full simulation box of 2 Gpc/h per side into 64 sub-boxes of side 500 Mpc/h. The results are presented in Figure 8. We see that this reduction of volume leads to a degradation of the contours by a factor of roughly ∼ 3. This factor of ∼ 3 is what one would expect from a Poisson-like distribution. In this case, we have that D_{,i} ∼ vol and C^{-1} ∼ 1/vol, so that F_ij ∼ vol. Then marginalized constraints scale as σ ∼ [(F^{-1})_{ii}]^{1/2} ∼ (vol)^{-1/2}. We have confirmed numerically that D_{,i} ∼ vol and C^{-1} ∼ 1/vol for our simulations, so that σ_500/σ_1000 ∼ √8 ∼ 3, in line with the scaling in Fig. 8.
More precisely, we find ∆f_NL^loc = 44.0, ∆f_NL^equi = 134.1. The fact that the scaling is slightly worse than the expectation could be due to the fact that a smaller box has a larger surface-area-to-volume ratio, so that some features are cut off by the box's boundaries and D_{,i} scales slightly sublinearly with volume.
Comments on robustness against small-scale uncertainties
We have seen that redshift errors do not significantly degrade our forecasted constraints on primordial non-Gaussianity. In this section, we discuss more broadly the robustness of our topological statistics against general small-scale uncertainties. First of all, we should note that while a filtration is defined semi-locally, topological features are by definition non-local. Therefore, they are suited to tracing non-local statistics of the survey map. Consider local small-scale uncertainties parameterized by the scale R_*. For example, R_* can be a scale at which gravitational nonlinearities (or galaxy formation) cannot be modeled by our simulation, as in [37]. Alternatively, R_* could be the typical scale of redshift errors, as in the previous section.
What impact can perturbations at the scale R_* have on our topological statistics? At their most dramatic, these perturbations can create or destroy topological features. However, such features are necessarily parameterized by a scale R < R_*. For example, a loop parameterized by a scale R ≫ R_* cannot be destroyed by a perturbation of scale R_*. On the other hand, we expect that the birth and death scales of this feature can change, but only by O(R_*). This can be regarded as a form of topological protection. This process is depicted in Fig. 9.
Therefore the coarse-graining scale implemented by a filtration allows us to identify and cut topological features that may depend on small-scale uncertainties, for example by ignoring features that die below the scale R_*. Additionally, significant topological features may only be affected at order R_*, so that under suitable binning the distributions of these features are robust.
Note that this intuitive argument is essentially a statement about the continuity of persistence diagrams under perturbations to the data. For simpler filtrations than ours, this statement can be formalized and proven as a "stability theorem" [38]. From previous analyses of Eos [81], we have that a conservative estimate for R * in our present context is the size of the largest halo, R * ∼ 2 − 5 Mpc/h. From Fig. 2, we see that all topological features are born and die beyond this scale. Therefore at the level of dark-matter-only simulations, small-scale dynamics is not expected to change our constraints. Note that a similar scale of perturbations is provided by redshift errors in eqns. (4.1,4.2).
The more pressing question comes when comparing N-body simulations to galaxy survey data, for which the dynamics by which galaxies form and populate halos is important. As an example of small-scale uncertainties in our present context, we can think of a typical Halo Occupation Distribution (HOD) algorithm, which describes how halos are occupied by galaxies. In HOD models, low-mass halos may or may not be populated by any galaxies, while high-mass halos may host not only central galaxies but also "satellite" galaxies, depending on a set of parameters. The situation for high-mass halos is not worrisome, due to the previous argument. The deletion of low-mass halos is slightly different in nature. Since most halos are in fact low-mass, this can have the effect of significantly changing the number density of tracers, which can affect the scales at which topological features form and disappear. Additionally, we might expect that deleting a halo can have a significant effect on nearby topological features.
To estimate the effect of such deletions, we perform a procedure similar to [17], where each halo catalog was randomly sub-sampled so that data derivatives were computed at fixed number density. Considering that most halos are low-mass, this procedure should, at least qualitatively, mimic the HOD effect on central galaxies we just discussed. As a test on the current setup, we therefore sub-sample the halo field by imposing that all sub-boxes across all cosmologies have the same number density. This amounts to randomly removing up to O(5)% of the total number of halos. Since this procedure is random, we perform it several times with different random seeds. We show this effect in Figure 10. Sub-sampling degrades the forecasted constraint on f_NL^equi by a factor of ∼ 2. On the other hand, constraints on f_NL^loc are more robust, degrading by about ∼ 25%. The degeneracy is also increased. We run several realizations of the random sub-sampling and observe that the constraints converge for more than 3 realizations. These results call for a more careful quantitative analysis of these uncertainties using a calibrated HOD implementation. Additionally, we might expect that a smart choice of filtration would mitigate the effect of deleting some points. We leave this to future work.
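The sub-sampling itself is a one-liner per catalog; a sketch (ours) that equalizes the object count, and hence the number density, across sub-boxes of equal volume:

```python
import numpy as np

def subsample_to_common_density(catalogs, seed=0):
    """Randomly remove halos so every equal-volume sub-box carries the
    same number of objects -- a crude stand-in for the stochastic
    occupation of low-mass halos in an HOD."""
    rng = np.random.default_rng(seed)
    n_min = min(len(c) for c in catalogs)
    return [c[rng.choice(len(c), size=n_min, replace=False)]
            for c in catalogs]
```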
Conclusions
We have presented a class of summary statistics that trace information regarding primordial non-Gaussianity at the map level. Our Fisher estimates demonstrate that significant information can be extracted by considering higher-order structures in position space. The particular structures we exploited are topological in nature, describing clustering, filament loops, and voids across cosmological scales. We forecast ∆f loc NL ∼ 16 and ∆f equi NL ∼ 41 using a volume of ∼ 1 (Gpc/h) 3 in redshift space without the distant observer approximation. Our results are quite robust to uncertainties in the determination of the redshift of each halo. Contours degrade by 20% when considering spectroscopic errors, and by 50% when considering photometric errors.
These results show strong potential compared to recent analyses of primordial non-Gaussianity. For instance, a Fisher forecast using the halo power spectrum and bispectrum in real space on the Eos simulations considered in this work gave ∆f_NL^loc ∼ 20 using full boxes of 8 (Gpc/h)^3 volume with a realistic theoretical covariance [82]. As for constraints from real data, the power spectrum and bispectrum measured from the BOSS survey were recently analyzed in the context of primordial non-Gaussianity, giving constraints of the order of ∆f_NL^loc ∼ 50 and ∆f_NL^equi ∼ 200 at 68% confidence on a volume of 2.4 (Gpc/h)^3 [4].

There are several avenues forward. For one, we have presented in this work expected constraints based on a Fisher matrix formalism. These implicitly Gaussianize hand-crafted summaries of our topological statistics. While our hand-crafted summaries are well-approximated by Gaussian distributions, they do not necessarily convey the entire information content of a persistence diagram. The tradeoff is that the likelihood for our persistence diagrams is only implicitly defined. Parameter estimation in the context of implicit likelihoods is precisely within the purview of the rapidly advancing field of simulation-based inference ([83][84][85][86]; see [87] for a recent review and [88][89][90][91][92] for cosmological applications). Within simulation-based inference it has been advocated (see e.g. [88]) to perform data compression in two steps: first by constructing by hand an interpretable summary statistic, and then by allowing a neural network to extract the maximum information content from that statistic. In our context, we would eventually like to tune our topological summaries for manifest robustness against small-scale effects. In some sense, this is a form of nuisance-hardening [93].
Ultimately, precise robustness against small-scale physics such as HOD must be verified numerically. This will require the development of a new simulation suite that includes both primordial non-Gaussianity and HOD. We may also speculate as to whether the physical principles elucidated in [37] would allow for fast and cheap N-body simulation strategies that directly target accuracy at the length scales where primordial non-Gaussianity can be directly distinguished from late-time effects. At any rate, the summary statistics introduced in this work demonstrate that significant information regarding primordial non-Gaussianity can be extracted in a manifestly position-space approach. Although we cannot yet demonstrate robustness against every conceivable small-scale effect, we anticipate that when considering more realistic models, the position-space definition and topological nature of our statistics will be very helpful.
Additionally, as we ultimately aim to make contact with observation, there are several complications that must be incorporated, including window functions, survey geometry, and more. We look forward to reporting on progress in this direction in future work.
Acknowledgments
We thank Daniel Baumann, Dan Green and Pierluigi Monaco for fruitful discussions on small-scale non-linearities. We thank Cora Uhlemann for useful comments and suggestions on a draft. M.B. acknowledges support from the Netherlands Organization for Scientific Research (NWO), which is funded by the Dutch Ministry of Education, Culture and Science (OCW), under VENI grant 016.Veni.192.210. M.B. also acknowledges support from the NWO project "Cosmic origins from simulated universes" for the computing time allocated to run a subset of the Eos simulations on Cartesius, a supercomputer which is part of the Dutch National Computing Facilities. Additionally, a subset of the Eos simulations on Cartesius and its successor Snellius were run under the project "Topological echoes of primordial physics in cosmological observables." A.C. was partially funded by the Netherlands eScience Center, grant number ETEC.2019.018. J.C. is supported by ANID scholarship No. 21210008. L.C. acknowledges support from ANID scholarship No. 21190484 and "Beca posgrado PUCV, término de tesis, 2021". J.N. is supported by FONDECYT grant 1211545, "Measuring the Field Spectrum of the Early Universe".
A.1 Convergence tests
Since we estimate the components of eq. (4.3) numerically, we must check that our results are converged with respect to simulation volume. As a heuristic for convergence, as in [94], we may examine the ratio of the uncertainty σ_θ(N) on each parameter θ for a varying number N of realizations over the uncertainty σ_θ(N = 120) for the maximum number N = 120 of realizations. We consider convergence with respect to the number of simulations used for the estimation of the covariance matrix, N_cov, and of the data derivative, N_der, separately. In Figure 11 we show the convergence of the marginalized constraints. We see that fluctuations with respect to N_cov affect the forecasted constraints within a few percent. On the other hand, and as in [94], σ grows monotonically with N_der. However, this growth flattens, and at 60% of the available simulation volume the results remain within 10% of their final values. This signals that if we had access to more simulation volume, the final results would not change by very much. We can heuristically extract extrapolated constraints via the ansatz σ(N) = σ_∞ − 2^{−(N−N_0)/τ_n}. Performing this check, we find that our forecasts are stable within 20%.
A.2 Large Non-Gaussianity
As observed in [17] in the case of local non-Gaussianity, the non-Gaussian contribution to topological statistics scales nonlinearly with f_NL. In particular, the scaling appears to be sublinear, so that using large values of f_NL to estimate the data derivative underestimates the constraining power of our topological summaries. On the other hand, since the signal is quite large, these results converge more quickly with respect to simulation volume. We show the Fisher ellipse when data derivatives are estimated using f_NL^equi = −1000 and f_NL^loc = 250 in Fig. 12. We observe that marginalized constraints degrade by a factor of ∼ 2.7. We note the consistency of this factor via the following investigation. First suppose that the scaling of the difference in data vector goes as (d_f − d_0) ∼ f^x. In this case, x can be computed by performing the minimization argmin_y ||(d_F − d_0) − y(d_f − d_0)||. Rearranging, one has that x = −ln(y)/ln(f/F). Here the final result depends on the convention for the norm ||·||. When using C^{-1} to compute the norm, more attention is given to bins with smaller variances. Using a Euclidean norm, more attention is given to bins with larger means. We observe that the lower variance bins are more dramatically sublinear. For definiteness, consider local non-Gaussianity. Using C^{-1} for the norm, we find y ∼ 6, while for a Euclidean norm, we find y ∼ 18. There is a similar behavior when comparing f_NL^equi = −30, −1000. These lead us to expect that σ_250/σ_10 ∈ [1.4, 4.2], consistent with the results in Fig. 12. We anticipate that more detailed modeling of the behavior of our topological statistics will enable greater simulation-efficiency in future studies. | 2022-03-17T01:16:23.707Z | 2022-03-15T00:00:00.000 | {
"year": 2022,
"sha1": "4e2746ea81746fa2f10a3a958c3a45601dc35bff",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "12df25c150364aa795bc94b364449ce7e199e3ba",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
210489882 | pes2o/s2orc | v3-fos-license | Identification of senior high school students' misconceptions in Makassar City on cell concepts by using the Certainty of Response Index (CRI) method
This study aims (i) to determine the level of understanding of senior high school students of the concept of cells and (ii) to identify the basic competencies of the cell concept in which students experience misconceptions. This is descriptive research. The method used to identify the misconceptions occurring in students is the Certainty of Response Index (CRI). The sample in this study consisted of students from 8 senior high schools in the city of Makassar. The instruments used were a reasoned multiple-choice diagnostic test equipped with CRI values and structured interviews to determine the causes of misconceptions. The results showed that senior high school students in the city of Makassar experienced misconceptions on all 6 Basic Competencies of the cell concept studied, with students tending to experience more misconceptions in Basic Competence 3.2, comparing transport mechanisms across membranes (diffusion, osmosis, active transport, endocytosis, and exocytosis) from observations, and in Basic Competence 4.2, experimenting with diffusion and osmosis using potato tubers or spinach or salted stems and linking the results to trans-membrane transport events.
Introduction
The biological challenges of the 21st century require students to integrate concepts from one organizational level up to more complex levels during the classroom learning process [1], so this knowledge should be well understood.
Concepts are used as a basis for thinking to solve problems in the learning process. Sometimes the concepts students hold deviate from, or even conflict with, accepted scientific concepts. This creates barriers to the acceptance of new concepts to be studied; an understanding of a concept that differs from the scientifically accepted one is known as a misconception [2]. Misconceptions are also referred to as erroneous ideas [3]. If this condition occurs, it should be addressed immediately, because misconceptions are also a factor that influences learning. Misconceptions can be acquired before entering school or can be triggered during the formal stage of education being undertaken.
In the field of biology, many studies have reported misconceptions on several concepts, including vertebrates and invertebrates [4], cell structure and function [5], photosynthesis [6], transportation systems and excretion systems [7], diffusion and osmosis [8], genetics [5], protein synthesis [3], [5], evolution [5], and cell metabolism [9]. If misconceptions occur on the concept of the cell, it is certain that they will develop in other material as well, considering that the cell concept is basic and very closely related to other material in biology. Misconceptions on basic concepts will lead to difficulties in connecting one concept to another.
In addition to misconceptions derived from textbooks, misconceptions held by students can be acquired through the learning process in the classroom; in other words, they can originate from the teacher. It has been shown that teachers can play a role in the formation of misunderstandings held by their students [10], [11]. It has further been argued that the assessment strategies used by biology teachers can influence the development of misunderstandings in their students [10]. Teachers can be a source of many misconceptions held by students [12]. This is consistent with results explaining that if a teacher misunderstands a concept and gives a wrong explanation of it in the learning process, students will also accept the wrong concept [13]. It was further reported that there were misconceptions among senior high school teachers in the city of Makassar, Indonesia: 48.30% experienced misconceptions, 49.10% understood the concept, and only 10.77% did not understand the concept [14]. Most likely, misconceptions also occur in students.
One way to find out whether someone is experiencing a misconception is to use the Certainty of Response Index (CRI) method. The CRI is a diagnostic test in the form of multiple-choice questions combined with the respondent's level of confidence in the correctness of the selected answer [15].
Method
This is a descriptive study describing the misconceptions of senior high school students in the city of Makassar about the cell concept. The research was carried out in state senior high schools in the city of Makassar from April 2018 to November 2018. The subjects in this study were Makassar city senior high school students who had learned cell concepts.
The population in this study was all students of class XI in senior high schools in the city of Makassar. The sample (subjects) consisted of students of class XI who had learned the concept of cells. The selection of research subjects was done by purposive sampling: the student sample was drawn from 8 senior high schools in the city of Makassar where previous data had shown misconceptions among the biology teachers.
This study uses research instruments in the form of reasoned multiple-choice tests to determine students' misconceptions, and interview guidelines to obtain supporting data on the misconceptions occurring in state senior high school students in the city of Makassar. The data collection techniques used in this study are interviews (non-test), using the interview guidelines, and measurement (tests), using a diagnostic instrument in the form of a reasoned multiple-choice test. Students are asked to indicate their level of confidence, on a scale of 0 to 5, in the answers given to the questions posed, and to give reasons for their answers. The interviews are aimed at obtaining supporting data on misconceptions occurring in senior high school students in Makassar City. The validity of the test instrument was checked by two expert validators in the fields of cell biology and evaluation. The instrument was then tested on a different sample, and each item was analyzed with the ANATES V4 program to determine its validity, reliability, discrimination, and level of difficulty. The data analysis technique for identifying students who hold misconceptions, know the concept, or do not know the concept uses the CRI (Certainty of Response Index) method.
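For reference, the CRI decision rule used in the analysis can be stated compactly. The sketch below assumes the conventional cutoff of 2.5 on the 0-5 confidence scale (following Hasan et al.'s CRI scheme); the cutoff value is our assumption, as the paper does not state it explicitly:

```python
def classify_cri(answer_correct, cri, threshold=2.5):
    """CRI rule: low confidence means the student does not know the
    concept (a lucky guess if correct); high confidence with a wrong
    answer marks a misconception."""
    if cri < threshold:
        return "does not know the concept"
    return "knows the concept" if answer_correct else "misconception"

# (correct?, CRI on the 0-5 scale) for four hypothetical responses
responses = [(True, 5), (False, 4), (True, 1), (False, 0)]
print([classify_cri(c, r) for c, r in responses])
```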
Research Result
The results of the data analysis on the level of understanding of senior high school students in Makassar City of the concept of cells using the CRI method can be seen in Figure 3.1. A description of Figure 3.1 is given in Table 3.1, which presents the level of understanding of students in Makassar City based on the results of diagnostic tests on 6 Basic Competencies (BC) about cell concepts.
Discussion
The results showed that 31.98% of senior high school students in Makassar City held misconceptions about the concept of cells. This value is almost comparable with the proportion of students who do not understand the concept, 49.80%, while those who understand the concept make up only 18.22%. Misconceptions occurring in students can be caused by various factors, including the books used, and can also originate from teachers; previous research showed that there were also misconceptions among teachers in the schools where data were collected [14]. The misconceptions occurring in students are explained in more detail below.

Understand the Concept. Based on the results of the study, the percentage of students who understood the concept was highest for items number 1, 4, 7 and 10: 7.28%, 7.36%, 6.74%, and 7.63% respectively. Questions number 1, 4, and 7 are derived from BC number 1, which covers describing the chemical components of cells and the structures and functions of cells as the smallest units of life, and identifying cell organelles through observation. Item number 10 became the question most understood by students because, when the teacher delivered this material, learning media in the form of pictures were used, so that students more easily remembered the structures and their functions. The results also show that item number 16 has a high percentage of students who understand the concept, namely 5.50%.
Not Understand.
Based on the results of the study, the percentage of students in Makassar City who did not understand the concept was the highest of the three categories of understanding, namely 49.80%. The results also show that not understanding the concept occurred most for item number 19, which the item analysis also classifies as a difficult question. This material relates to processes, while most cell biology textbooks focus more on structure. This is in line with the findings of several researchers that cell biology, as introduced in the school curriculum, focuses primarily on structure rather than process, even though understanding of biological processes has been recognized as important for a comprehensive understanding of biological systems [16], [17]. A respondent is interpreted as not understanding the concept when the answer is right but the CRI value is low, or when the answer is wrong and the CRI value in selecting the answer is also low [18].
Misconception.
Based on the results of the study, it is known that misconceptions on cell concepts are found in all the Basic Competencies examined (Table 3.1). The highest percentage of misconceptions is in BC number 4 (49.47%) and the lowest is in BC number 3 (18.05%).

a. Students' misconceptions on Basic Competence 1 (BC 1). BC 1 consists of 6 indicators and is divided into 9 items (Table 5.1). For question number 1, the results show that 5.39% of all students experienced a misconception. Some students confidently believe that the compound that is not a constituent component of cells is H2O, while other students believe that phospholipids are not constituent components of cells. The actual concept, according to Campbell, is that the compound that is not a constituent component of cells is lactic acid, because lactic acid is a product of anaerobic respiration in certain fungal and bacterial cells, exploited in the milk processing industry [19].
The misconception occurring in students is supported by statements that the material on cell structure is considered difficult, while the other three subjects did not consider the material in BC number 1 difficult to learn. From the interviews and the analysis of previous research, it emerged that teachers who taught at these schools also held misconceptions on this material. Misconceptions in teachers can stem from the knowledge gained in college; the long period since their studies allows such knowledge to be retained. This is supported by the statement of Murni, who explains that misconceptions obtained from previous education will remain with a person [20]. According to Naz, if misconceptions in a person are not converted into a true understanding of the concept, they will remain [21]. The results also show that within BC number 1 the most misconceptions occur on item number 8, in which respondents were asked to identify which of the answer choices were not mitochondrial functions; most students in Makassar City experienced misconceptions about the functions of cell organelles. The students' mistaken concept of mitochondrial function, namely lipid synthesis, is correlated with question number 9: the Golgi apparatus functions as a site of lipid synthesis and of carbohydrate synthesis, sorts and distributes products from the endoplasmic reticulum, and plays a role in packaging secretions released from cells. Students are not able to distinguish the functions of the mitochondria and the Golgi apparatus. This can be seen from the percentage contributions to misconception, namely 5.76% for question number 8 and 4.00% for question number 9. The material tested in question number 8 requires reasoning because it relates to the functions of cell organelles. In line with this, it has been explained that a lack of reasoning about the material being studied can lead to misconceptions [22].

b. Students' misconceptions on Basic Competence 2 (BC 2). Basic Competence number 2 relates to transport processes across the cell membrane (diffusion, osmosis, active transport, endocytosis, and exocytosis). Based on the results of the study, there is a misconception in all items related to BC number 2. The highest misconceptions occur in questions number 11 and 12, at 7.04% and 5.44%. The question in item 14 asks which of the given statements about the exocytosis process is incorrect. Some students answer with wrong answers and are very confident: some answered that what is not true of exocytosis is that it transports macromolecules out of cells, that the vesicles transported out of cells are formed from the Golgi apparatus, that the compounds transported can be proteins, or that they can be polysaccharides. This shows that these students hold misconceptions. The correct concept, according to Campbell, is that exocytosis is the process of transporting macromolecules (which can be proteins or polysaccharides) from inside cells to outside cells, in vesicles formed from the Golgi apparatus [19]. Similar problems also occurred in items 13 and 14.
Based on the analysis of the level of difficulty, items number 13 and 14 fall into the very difficult and difficult categories. Questions at these difficulty levels require high reasoning about the concept being tested. Based on the interview results, many things support the high rate of misconception among students on BC number 2, including the learning resources used by students, namely school textbooks whose content is not detailed enough to give students a thorough understanding.
c. Students' misconceptions on basic competence 3 (BC 3)
For Basic Competence number 3, it is known that 18.05% of students in Makassar City experienced misconceptions, contributing 2.29% of all students who experienced misconceptions on the cell concept material. BC number 3 is translated into 1 indicator with 1 item, namely item number 10. Two misconceptions were found in item number 10: 1) students believe that the phosphate part of the cell membrane layer is hydrophobic while the lipid part is hydrophilic, and 2) students believe that neither the phosphate nor the lipid parts of the cell membrane structure are hydrophilic or hydrophobic. The real concept, according to [23], is that the phosphate portion of the lipid bilayer of the cell membrane is hydrophilic while the lipid portion is hydrophobic. The misconception occurring in item number 10 arises because the students who hold it cannot reason well about the concept of cell membrane structure.
d. Students' misconceptions on Basic Competence 4 (BC 4). BC number 4 is contained in one item, item number 15. There are two types of misconceptions on question number 15, namely: 1) the potato put into a hypotonic solution lengthens because the water potential in the potato cell is higher than the water potential in the solution; 2) the potato put into a hypotonic solution lengthens because the concentration of the solution is higher than the concentration inside the potato cell.
The real concept is that potato slices put into a hypotonic solution lengthen from their previous size because the water potential of the hypotonic solution is higher than the water potential of the potato cells' contents, causing water to move into the potato cells. This is called osmosis [19].
Item number 15 is based on applying the concept of cell osmosis in relation to real life. The misconception of students who cannot distinguish the behavior of cells in hypotonic, hypertonic, and isotonic solutions can be acquired when applying the concept in real life. This is in line with the statement explaining that misconceptions can occur as a result of errors in interpreting natural phenomena in daily life [24]. Misconceptions can also be obtained from natural phenomena in the surrounding environment [25]. In addition, this also happens because of inappropriate use of terminology: students often cannot distinguish hypotonic, hypertonic, and isotonic. Misconceptions can occur if mastery of subject knowledge is inadequate and inappropriate terminology is used [7].

e. Students' misconceptions on Basic Competence 5 (BC 5). BC number 5 is described in four items. Based on the results of the study, several student understandings differ from those of experts. The results showed high misconceptions among students in the city of Makassar on item number 19, an item in the very difficult category. The type of misconception occurring is that students believe the difference in the amount of ATP produced in aerobic and anaerobic respiration can occur because: 1) at the glycolysis stage of anaerobic respiration, complete decomposition of carbon compounds occurs; 2) all reaction stages in aerobic respiration produce energy in the form of ATP; 3) in aerobic respiration, the products of glycolysis immediately enter electron transport, which produces large amounts of ATP; and 4) at the glycolysis stage of aerobic respiration, NADH, FADH2, and ATP are produced in large numbers upon entering electron transport.
The actual concept is that the difference in the amount of ATP produced in aerobic and anaerobic respiration occurs because most of the energy in anaerobic respiration remains stored in the final compound, ethanol or lactic acid. The glycolysis stage only produces 2 molecules of pyruvic acid, 2 ATP, and 2 NADH; no FADH2 molecules are produced at the glycolysis stage. Not all stages of aerobic respiration produce ATP, because the oxidative decarboxylation stage does not produce ATP [19]. Based on the data analysis conducted by the researchers, the misconception among students in Makassar City on item number 19 was due to students' lack of reasoning about the metabolic material. Students must analyse in detail which answer choice is right for the question. The characteristics of metabolic material, which requires mastery of chemistry concepts, are also an obstacle that leads many students to misconceptions. Learning concepts in biology requires those who study them to learn thoroughly about interrelated concepts, because one concept can be the basis of knowledge for other concepts [4]. Since many concepts in biology are related to concepts in chemistry and physics, someone who studies biology must also have knowledge in those fields; for example, metabolic processes are grounded in chemistry. This underlies the high percentage of misconceptions among students in Makassar City on metabolic material.
f. Students' misconceptions on basic competence 6 (BC 6) BC number 6 is described in six items. Items number 21 and 25 show the highest rates of misconception, at 4.43% and 3.68% respectively.
Question number 21 asks students to identify a cell image showing one of the stages of division. There are two types of misconception: some students believe the image shows metaphase, marked by chromosomes at the cell equator, while others believe it shows anaphase II, in which chromatids separate and move toward the cell poles. The correct concept is that the image shows anaphase, the stage in which chromatids separate and move toward the cell poles [23]. Item number 25 has one of the highest percentages of misconceptions of all items. Material characteristics that demand good understanding are assumed to be the reason students could not answer this question correctly.
The types of misconception on item number 25 include: 1) some students believe that the number of chromosome sets in an ootid differs from that in secondary spermatocytes, 2) some students in Makassar City believe that spermatogenesis produces four functional sperm whereas oogenesis produces more than one functional ovum, 3) some students believe that meiosis I does not produce secondary oocytes and secondary spermatocytes, and 4) some students believe that spermatogenesis does not occur in the testes and oogenesis does not occur in the ovary. The correct concept is that the number of chromosome sets in an ootid is the same as in a secondary spermatocyte [19]. Chromosomal reduction occurs only in meiosis I and not in meiosis II, so the secondary oocytes produced by meiosis I and the ootids produced by meiosis II have the same chromosome number.
Conclusion
The conclusion of this study is that, of high school students in the city of Makassar, 18.22% understand the concept of cells, 49.80% do not understand it, and 31.98% hold misconceptions about it. Misconceptions occur in all basic competencies (BC 1-6) tested on cell concepts, at 32.53%, 38.71%, 18.05%, 49.47%, 27.76%, and 25.38% respectively. | 2019-11-14T17:10:27.858Z | 2019-10-01T00:00:00.000 | {
"year": 2019,
"sha1": "e4709ed49c3205fb2744e6f09add6018e84920dc",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1317/1/012194",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "5d26a1ee7df5c589c50b769b2b13dac072cc88c6",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
91324764 | pes2o/s2orc | v3-fos-license | Laboratory evaluation of toxicity of Spinosad tablets and Tracer 48 SC insecticides against different stages of American cockroaches (Periplaneta americana L.), in Jeddah governorate
Periplaneta americana is an important household insect pest worldwide and acts as a mechanical vector and reservoir for pathogenic agents. Fermented insecticides are biopesticides derived from fermentation by soil-dwelling actinomycetes. The aim of this study was to test the susceptibility of P. americana adults and nymphs to Spinosad tablets and Tracer 48 SC at different concentrations. Bioassays were performed by feeding and contact toxicity methods, and mortality was recorded after 48 hours of exposure. Mortality data from the replicates were assessed by probit analysis. All tested insects showed high susceptibility to spinosad compared with the control. The effectiveness of the fermented insecticides against different stages of P. americana indicates that these formulations can be strongly effective for its control.
Introduction
American cockroaches (Periplaneta americana) (Linnaeus), order Dictyoptera, suborder Blattaria, are medically important insects [1] and among the most notorious pests found in kitchens [2]. P. americana is one of the largest common cockroach species [3]. Out of some 500 species, 30 are considered household pests [4]. A number of pest cockroaches live in or around homes, and they are omnivorous scavengers [1]. They survive in warm weather with high-moisture conditions as well as in environments unfavorable to humans [5].
P. americana can spread bacteria, fungi, and other pathogenic microorganisms from infected areas [6] and cause allergies in humans [7]. They play an important role in the transmission of different diseases by both mechanical and biological routes [8,9]. P. americana spends most of its time in sewage and sewer pipes, which usually contain high densities of pathogens [10]. They also feed on garbage, giving them ample opportunity to disseminate human pathogens [11,12]. In addition, their nocturnal habits and their filthy habit of eating their own feces make them ideal carriers of numerous pathogenic microbes [13].
A number of fungal species have been isolated from both the external body parts and the faeces of cockroaches. Candida spp., Rhodotorula spp., Aspergillus spp., Fusarium spp., Penicillium spp., and Geotrichum spp. appeared on external surfaces of cockroaches. Other medically important molds, Alternaria spp., Cladosporium spp., Trichoderma spp., Mucor spp., and Chrysosporium spp., have been isolated from a few cockroaches. Candida glabrata and Candida albicans were the species most frequently isolated from cockroaches, while Candida parapsilosis and Candida guilliermondii were present on the external surfaces of a few cockroaches. In addition, three species of Aspergillus have been identified via molecular characterization: Aspergillus niger was the most common and frequently isolated species from cockroaches, whereas A. fumigatus and A. flavus were isolated from the external surfaces of a few. A total of 6 samples were found to carry two species of Aspergillus on their external surfaces [20,21].
P. americana can be controlled by applying insecticidal dusts and residual sprays to hiding and resting places. Chemical control has so far been the most popular and effective method [27], but relying on insecticides alone is not suitable for several reasons, the most important being that cockroaches may develop resistance to frequently used insecticides [28]. Both conventional and non-conventional insecticides used against P. americana have exhibited high efficiency in controlling this pest [29].
Biological insecticides, such as microbial products, do not pose a disease risk to wildlife, humans, or other organisms not closely related to the target insect [30,31,32]. Several new chemical substances with low mammalian toxicity have been evaluated for this purpose in several parts of the world, aiming to gradually replace conventional insecticides such as the organophosphates (OPs) [33]. For instance, some pyrethroids have been successfully used as alternatives to OPs [34]. Spinosad, which is based on metabolites from the actinomycete Saccharopolyspora spinosa, appears to be one of the most promising new grain protectants [35]. Spinosad has low mammalian toxicity and acts on the insect nervous system by ingestion or contact [34].
Spinosad is a microbial fermentation product, analogous to the abermectins, which are fermentation products of Streptomyces avermitilis used in baits for household insect pests. The best-known home gardening product of this type is spinosad, which is composed of spinosyns A and D. The fermented product is very toxic to caterpillar pests such as cabbageworm, cabbage looper, diamondback moth, armyworm, and cutworm, as well as fruit flies such as spotted wing drosophila. Spinosad acts on a susceptible insect's stomach and nervous system. It is primarily ingested by feeding insects but can have some efficacy when sprayed directly on insects. Affected pests cease feeding and undergo partial paralysis within minutes of exposure to spinosad, but it may take up to two days for the insects to die [36]. It has low toxicity to many beneficial insects that prey on pests and is nontoxic to mammals and other vertebrates, with the exception of some fish. Because it is selectively toxic to many pest species and relatively safe for non-target species, spinosad has become highly desirable as an organic insecticide. Spinosyn A causes involuntary muscle contractions and tremors through widespread excitation of neurons in the central nervous system, and it produces excitation when applied directly to isolated insect ganglia at submicromolar concentrations. Prolonged spinosyn-induced hyperexcitation results in paralysis associated with neuromuscular fatigue [36].
Much of the work on insecticidal efficacy has been done on B. germanica; very little data are available for P. americana. Therefore, building on the work of various researchers, the present study was designed to investigate the insecticidal efficacy of fermented insecticide formulations against P. americana and the susceptibility of its different stages to these insecticides in laboratory bioassays using feeding and contact toxicity methods.
Experimental insect
P. americana was collected from dark and damp places (sewers) in different areas of Jeddah province using food jars surrounded by dark cloth as traps [37]. The strains were maintained in the laboratory and used in this study. Traps were placed into main sewers, and cockroaches were collected every two days and placed in glass containers (30 × 60 × 30 cm). They were then kept under laboratory conditions of 25 ± 3 °C and 75 ± 5% RH.
Chemicals
The present study investigated the insecticidal efficacy of two different fermented insecticides, Spinosad tablets and Tracer 48 SC. These formulations were chosen because they had not previously been tested against different stages of P. americana in Jeddah governorate. All chemicals were obtained from Machinery & Agricultural Materials Co., Ltd, Jeddah, Saudi Arabia.
The insecticides were tested against P. americana adults and nymphs by feeding and contact toxicity methods; different concentrations were prepared, and mortality percentages were recorded after 48 h.
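Although the paper does not state how the concentration series was prepared, dilutions of this kind typically follow C1V1 = C2V2. A minimal sketch, with an assumed stock strength and placeholder target concentrations:

```python
# Placeholder values: stock strength and target concentrations are assumed,
# since the paper does not state how the series was prepared.
stock_pct = 10.0         # % active ingredient in the stock (assumed)
final_volume_ml = 100.0  # volume prepared per concentration (assumed)

for target_pct in [5.0, 3.0, 1.0, 0.5, 0.1]:
    stock_ml = target_pct * final_volume_ml / stock_pct  # C1*V1 = C2*V2
    water_ml = final_volume_ml - stock_ml
    print(f"{target_pct:>4}%: {stock_ml:5.1f} ml stock + {water_ml:5.1f} ml water")
```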
Feeding bioassay
The feeding bioassay followed [38], with some modifications, against adults and nymphs. Bait was improvised in the laboratory. Feeding bioassays were conducted with lab strains in plastic boxes coated on the inside with petroleum jelly 2 cm from the top to prevent the cockroaches from escaping. A bait of 1 g of white flour, 1 g of powdered milk, and 1 g of sugar was prepared manually and treated with different concentrations of the insecticides and an appropriate amount of water to make a semisolid bait. A single pellet of approximately 3-4 g was large enough to be eaten entirely by adults or nymphs starved for 24 h. Treated pellets were dried in a fume hood for 15-20 min and then provided individually to the adults and nymphs. Control insects received pellets treated with water only. Each replicate consisted of 30 insects, with three replicates per concentration. Mortality was assessed at 48 h.
Contact toxicity bioassay
The contact toxicity bioassay followed [39], with some modifications, against adults and nymphs. The contact mixture was improvised in the laboratory, and the bioassays were conducted as described above. Different concentrations of the insecticide were sprayed on the inside of each plastic box, ensuring that the insecticide covered all sides. Three plastic boxes, each with 30 cockroaches (adults or nymphs), were used for each concentration.
Statistical analysis
The study used a completely randomized design (CRD) in a factorial experiment. The data were statistically analyzed using analysis of variance (ANOVA), and LC50 and LC90 values were calculated with a probit analysis program [40]. All malformations were photographed with a digital camera.
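As a rough illustration of the probit step, the sketch below fits mortality against log10(concentration) with a probit link and recovers LC50 and LC90. The doses and kill counts are invented, and the study itself used a dedicated probit analysis program [40] rather than this code:

```python
# Illustrative probit fit; doses and kill counts are invented, and the
# study used a dedicated probit program [40] rather than this code.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

conc = np.array([0.01, 0.05, 0.1, 0.5, 1.0, 5.0])  # % concentration (assumed)
dead = np.array([3, 9, 14, 22, 27, 30])            # deaths out of 30 (assumed)
n = np.full_like(dead, 30)

X = sm.add_constant(np.log10(conc))
fit = sm.GLM(np.column_stack([dead, n - dead]), X,
             family=sm.families.Binomial(link=sm.families.links.Probit())).fit()
b0, b1 = fit.params

lc50 = 10 ** (-b0 / b1)                   # probit(0.5) = 0 at 50% mortality
lc90 = 10 ** ((norm.ppf(0.9) - b0) / b1)  # probit(0.9) ~ 1.28 at 90%
print(f"LC50 = {lc50:.3f}%, LC90 = {lc90:.3f}%")
```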
Results
Results of the feeding bioassay with Spinosad tablets against P. americana adults and nymphs after 48 h of exposure are presented in Table 1 and Fig. 1. Even low concentrations produced high mortality, reaching 100.00% in adults and nymphs at the highest concentrations of 5% and 3%, respectively. Nymphs were more sensitive to Spinosad tablets (LC50 = 0.019%) than adults (LC50 = 0.065%) after 48 h. In the contact toxicity bioassay with Spinosad tablets, mortality was positively correlated with concentration and exposure interval.
Mortality of adults and nymphs in the feeding bioassay with Tracer 48SC is summarized in Table 2 and Fig. 2. Mortality percentages increased markedly with concentration at all exposure intervals for both adults and nymphs. After 48 h, Tracer 48SC produced a high level of mortality in adults and nymphs (96.66%) even at low concentrations. Nymphs were more sensitive to Tracer 48SC (LC50 = 0.068%) than adults (LC50 = 0.097%) after 48 h. In the contact toxicity method, mortality was likewise positively correlated with concentration and exposure interval. LC50 = lethal concentration that kills 50% of the treated insects; LC90 = lethal concentration that kills 90% of the treated insects; U: upper limit; L: lower limit; *X2 = Chi-square. A tabulated Chi-square larger than the calculated value at the 0.05 level of significance indicates homogeneity of results.
Discussion
The modes of action of fermented insecticides have not been fully documented, but they kill a wide range of insect pests when ingested or topically applied [41] and exhibit broad-spectrum activity against insect species in different orders, especially Lepidoptera and Diptera [42]. In our findings, Spinosad tablets and Tracer showed high mortality for both adults and nymphs with sustained effects on the nervous system. Similarly, [43] showed that spinosad exhibits a high level of toxicity with a dose-response relationship, and the determined LC50 revealed a neurotoxic activity of spinosad. Our results match earlier observations by [44], who reported that insecticidal spinosyns have potent effects on the function of GABA receptors in small-diameter cockroach neurons and can elicit a small-amplitude Cl− current. In another study, [33] examined the insecticidal effect of spinosad against adults of the lesser grain borer, Rhyzopertha dominica, the rice weevil, Sitophilus oryzae, and the confused flour beetle, Tribolium confusum Jacquelin, on wheat, and the larger grain borer, Prostephanus truncatus, on maize; R. dominica and P. truncatus were very susceptible to spinosad, followed by S. oryzae, while T. confusum was the least susceptible. The insecticidal mode of action of fermented insecticides is not completely understood but is considered unique among insecticides: they interact with γ-aminobutyric acid receptors and nicotinic acetylcholine receptors, eventually disrupting neuronal activity and causing insect paralysis and death [45]. The insecticides selected here had not previously been used to control the different stages of P. americana in Jeddah governorate, but other formulations of the same class have been tested against other insects elsewhere. Spinosad, or Success®, was the first member of the fermented insecticides and was first introduced for control of the diamondback moth (DBM), Plutella xylostella, in Asia [46]. Spinosad appeared to be effective against the pest on aubergines. In field tests on onion, lambda-cyhalothrin and fipronil were highly effective against T. tabaci. The effect of spinosad on thrips in cotton was studied by [47]. In another study, field trials of the efficacy of spinosad against vegetable pests showed that foliar application can control thrips in leeks and salad onion as well as caterpillar pests in head and flowering brussels sprouts [48]. [49] investigated the bioefficacy of eight insecticides based on four active substances in 2002 in Slovenia and found that spinosad and abamectin exhibited the highest efficiency against T. tabaci. [50] evaluated newer insecticides for control of pomegranate fruit borer at Mahatma Phule Krishi Vidyapeeth, Rahuri (MS), India, and found sprayed spinosad 45 SC to be effective. [51] conducted a field experiment at Dharwad, Karnataka, India to evaluate the efficacy of different insecticides against sucking pests of okra and found spinosad 45 SC the most effective against thrips.
Conclusion
The present study revealed that Spinosad tablets and Tracer 48SC are toxic to P. americana adults and nymphs and can be used for their biological control in Jeddah governorate.
Figure 1 Susceptibility of adults and nymphs of P. americana to Spinosad tablets using feeding and contact toxicity methods after 48 h
Table 1
Susceptibility of adults and nymphs of P. americana to Spinosad tablets using feeding and contact toxicity methods after 48 h
Table 2
Susceptibility of adults and nymphs of P. americana to Tracer 48SC using feeding and contact toxicity methods after 48 h | 2019-04-03T13:09:59.944Z | 2019-01-30T00:00:00.000 | {
"year": 2019,
"sha1": "3e01c1ea0419c3d95b439d719f912feb531f46bf",
"oa_license": "CCBY",
"oa_url": "https://gsconlinepress.com/journals/gscbps/sites/default/files/GSCBPS-2018-0163.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "3e01c1ea0419c3d95b439d719f912feb531f46bf",
"s2fieldsofstudy": [
"Agricultural And Food Sciences",
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
16353544 | pes2o/s2orc | v3-fos-license | Development of Elvitegravir Resistance and Linkage of Integrase Inhibitor Mutations with Protease and Reverse Transcriptase Resistance Mutations
Failure of antiretroviral regimens containing elvitegravir (EVG) and raltegravir (RAL) can result in the appearance of integrase inhibitor (INI) drug-resistance mutations (DRMs). While several INI DRMs have been identified, the evolution of EVG DRMs and the linkage of these DRMs with protease inhibitor (PI) and reverse transcriptase inhibitor (RTI) DRMs have not been studied at the clonal level. We examined the development of INI DRMs in 10 patients failing EVG-containing regimens over time, and the linkage of INI DRMs with PI and RTI DRMs in these patients plus 6 RAL-treated patients. A one-step RT-nested PCR protocol was used to generate a 2.7 kb amplicon that included the PR, RT, and IN coding regions, and standard cloning and sequencing techniques were used to determine DRMs in 1,277 clones (mean 21 clones per time point). Results showed all patients had multiple PI, NRTI, and/or NNRTI DRMs at baseline, but no primary INI DRM. EVG-treated patients developed from 2 to 6 strains with different primary INI DRMs as early as 2 weeks after initiation of treatment, predominantly as single mutations. The prevalence of these strains fluctuated and new strains, and/or strains with new combinations of INI DRMs, developed over time. Final failure samples (weeks 14 to 48) typically showed a dominant strain with multiple mutations or N155H alone. Single N155H or multiple mutations were also observed in RAL-treated patients at virologic failure. All patient strains showed evidence of INI DRM co-located with single or multiple PI and/or RTI DRMs on the same viral strand. Our study shows that EVG treatment can select for a number of distinct INI-resistant strains whose prevalence fluctuates over time. Continued appearance of new INI DRMs after initial INI failure suggests a potent, highly dynamic selection of INI-resistant strains that is unaffected by co-location with PI and RTI DRMs.
Introduction
HIV-1 integrase inhibitors (INI) are a relatively new class of antiretroviral (ARV) medications that function by preventing strand transfer and integration of the HIV-1 provirus into the host cell genome [1]. Raltegravir (RAL) was the first US FDA-approved INI and has demonstrated significant antiviral activity in ARV-experienced and -naïve patients when combined with other ARV classes [2,3]. Elvitegravir (EVG) and dolutegravir (DTG) are INIs in clinical development and demonstrate comparable virologic activity in clinical trials [4][5][6][7].
Although INI drug-resistant mutations (DRMs) have rarely been described in ARV naïve, or INI naïve ARV experienced patients using conventional technologies [8,9], virologic failure on INI-containing regimens has been described, and DRMs in the HIV-1 integrase (IN) coding region conferring phenotypic loss of susceptibility to these agents has been documented and reviewed elsewhere [10]. However, some IN DRMs confer resistance to several INIs (e.g., Q148HRK reduces RAL, EVG, and DTG susceptibility), others to some but not all INIs (e.g. N155H reduces susceptibility to RAL and EVG, but not DTG), and more than one pathway leading to INI resistance has been described (e.g., N155H, Q148HRK, or Y143RC for RAL resistance) [11]. Mutant strains have also been described in vivo from clinical isolates and by site directed mutagenesis where multiple DRMs on the same virus strand (N155H + E92Q), or addition of accessory mutations (Q148H + G140S), result in significantly greater loss of susceptibility [10]. In addition, certain INI DRMs result in a loss of viral fitness or replication capacity [12][13][14], and disappearance of INI DRMs after RAL discontinuation with resultant increase in RC has been described [15,16], thus demonstrating the dynamic nature and complexity of INI resistance development.
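The cross-resistance relationships listed above can be captured as a simple lookup. This sketch encodes only the examples given in the text; a real interpretation engine would rely on a curated database such as the Stanford HIVdb:

```python
# Only the relationships stated in the text are encoded here; a real
# interpretation system would use a curated database (e.g., Stanford HIVdb).
REDUCED_SUSCEPTIBILITY = {
    "Q148H": {"RAL", "EVG", "DTG"},
    "Q148R": {"RAL", "EVG", "DTG"},
    "Q148K": {"RAL", "EVG", "DTG"},
    "N155H": {"RAL", "EVG"},  # DTG retains activity
    "Y143R": {"RAL"},
    "Y143C": {"RAL"},
}

def affected_inis(mutations):
    """Return the INIs with reduced susceptibility for a list of DRMs."""
    hit = set()
    for m in mutations:
        hit |= REDUCED_SUSCEPTIBILITY.get(m, set())
    return sorted(hit)

print(affected_inis(["N155H"]))           # ['EVG', 'RAL']
print(affected_inis(["Q148H", "Y143C"]))  # ['DTG', 'EVG', 'RAL']
```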
Current commercial genotypic resistance assays generally use population sequencing to identify resistance to HIV-1 reverse transcriptase (RT) inhibitors, protease (PR) inhibitors, and INIs by generating at least two separate amplicons (one for PR-RT, and one for IN). These assays cannot determine whether several INI DRMs occur on the same viral strand, evolve independently, or are present at low frequencies. Newer technologies, such as next generation sequencing (NGS) or parallel allele-specific sequencing (PASS), improve on the sensitivity of population sequencing by being able to detect low-frequency variants in INI-naïve and -experienced patients [17,18]. However, these assays cannot establish linkage between integrase inhibitor (INI), reverse transcriptase inhibitor (RTI), and protease inhibitor (PI) DRM because of the technical challenges of this analysis due to the length of sequence that must be interrogated. It is thus desirable to have a single amplification/amplicon generated during RT-PCR that can be used "universally" to genotype newer HIV-1 pol gene targets (e.g. RNase H or connection domain) as well as to understand the co-linkage and evolution of DRMs, and multiple polymorphisms and their role in resistance pathways, among the three target functional enzymes.
Results
Ten EVG-treated patients and 6 RAL-treated patients were studied (Figure 1). Because the EVG-treated patients were part of a clinical trial, samples from serial time points were available for analysis. Approximately 5 time points per patient, ranging from 2 to 48 weeks of EVG treatment, were analyzed, and an average of 21 clones per time point (1120 total EVG clones) were generated. Only single, failure time points were available for the RAL-treated patients, and an average of 26 clones were analyzed from this group (157 total RAL clones).
PCR-mediated Recombination
Mixtures of patient-derived plasmid clones were prepared, amplified, cloned, and approximately 30 clones per mixture were analyzed for the frequency of PCR-mediated recombination. PHI tests and Simplot analysis showed that among the 8 plasmid mixes tested, 4 mixes showed no significant recombination in the 664-bp IN coding region (p>0.05), while the 4 mixes that did show significant evidence of recombination (p<0.05) had only 1-2 recombinant clones per sample, producing an overall average of 0.75 recombination events per sample. When analyzing recombination across the PR/RT coding region, 3 of 8 plasmid mixes showed significant evidence (p<0.05) of recombination (1-3 recombinant clones per sample), with an overall average of 0.88 recombinants per sample. To investigate the frequency of recombination between the PR/RT and IN coding regions, the clone sequences (PR/RT and IN) were concatenated, and recombination analysis performed. Results showed that there was an average of 3 recombination events per patient (from a mean of 30 clones) that occurred between the RT and IN coding regions. Figure 1 shows the consensus PI, NRTI, and NNRTI DRM profile for each patient studied. All patients were infected with HIV-1 clade B strains. Because of the high degree of ARV experience of the patients studied, the PI and RTI DRM profiles were highly homogeneous (mean = 92%, range 40%-100%) within each patient's clones at each time point evaluated. Most patients' HIV strains exhibited DRM profiles consistent with high-level resistance to at least 2 drug classes. The EVG-treated patients had an average of 3.3 PI (range 0-6), 5.9 NRTI (range 2-9), and 1.3 NNRTI (range 0-3) DRM at baseline. Initial background regimens (BR) were limited to NRTIs and enfuvirtide in this study and, in general, the baseline PR and RT mutational profile was maintained with only some minor fluctuations throughout the follow-up period. At the last time point evaluated, 95.2% (101/106) of the PI and RTI DRM found at baseline were still present, and the EVG-treated patients had an average of 3.3 PI, 5.1 NRTI, and 1.4 NNRTI resistance mutations. One patient developed a new PR mutation during EVG+BR treatment, 3 patients developed new NRTI mutations, and one patient developed a new NNRTI mutation compared to baseline. Considering subjects were not taking NNRTIs or PIs in their regimen, the PI and NNRTI mutations that developed likely reflect the recurrence of archived resistant viruses from previous treatments.
PR and RT Mutations
The RAL-treated patients had resistance profiles at failure similar to those of the EVG-treated patients, with an average of 3.3 PI (range 0-8), 5.0 NRTI (range 0-8), and 2.2 NNRTI (range 0-5) resistance mutations. Figure 2 shows the distribution of INI DRM found in the clones from EVG-treated patients at various time points. Since HIV viral loads in our patient samples were >3,000 RNA copies/ml, sampling errors during the RT-PCR process were less likely to occur [19,20]. In addition, in our study some samples with low viral loads had several INI-resistant quasispecies, while some samples with high viral loads had few INI-resistant quasispecies. Overall, our results showed that there was no relationship between the number of INI-resistant quasispecies and viral load (p = 0.15).
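A test of this kind, relating quasispecies counts to viral load, could be run with ordinary linear regression, one of the statistical methods the study reports using; the data below are invented for illustration:

```python
# Invented data; shown only to illustrate the kind of regression behind
# the reported p = 0.15.
import numpy as np
from scipy.stats import linregress

viral_load = np.array([3500, 8000, 12000, 30000, 65000, 120000])  # copies/ml
n_quasispecies = np.array([4, 1, 3, 2, 5, 2])

res = linregress(np.log10(viral_load), n_quasispecies)
print(f"slope = {res.slope:.2f}, p = {res.pvalue:.2f}")  # no clear relationship
```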
Mutations after EVG Treatment
No major INI DRM were found in the baseline samples; however, strains with different INI resistance genotypes (i.e. single or combinations of INI DRM) were identified in the plasma of EVG-treated patients as early as 2 weeks after initiation of treatment. In the 5 patients that were analyzed at 2 weeks post treatment, an average of 1.6 strains carrying different INI resistance genotypes (range 0-4) were detected. The genotypes found at the early time points were primarily single mutations (T66A/K, E92Q, Q148R, and N155H), and several patients had more than one distinct population of single mutation-containing strains. At Week 4, patients (n = 6) had an average of 3.2 (range 2-6) different INI resistance genotypes, and this average was maintained throughout the later time points until week 24 and beyond, where the average number of INI resistant genotypes was 1.8 strains (range 1-4). These later time points showed both the influx of additional single mutation-containing strains, and the appearance of multiple distinct strains carrying 2 or more INI resistance mutations. The last time points studied in most patients showed the emergence of a dominant 2 or 3 mutation-containing strain (e.g. E138K + S147G + Q148R); although in 3 patients the dominant strain carried a single mutation, N155H. The bias of selecting more early treatment time points resulted in the highest frequency of strains containing single INI DRM; however, some mutations were never found alone (e.g. E138A/K, G140C/S, and S147G, p<0.001). Several combinations of mutations were prevalent and found in multiple time points from multiple patients. In EVG-treated patients, the most prevalent two-mutation combinations on the same genome were G140C/S + Q148H/K/R, E138A/K + Q148H/K/R, S147G + Q148H/K/R, and E92Q + N155H/S. The most frequently occurring three-mutation combination was E138K + S147G + Q148R, found in 112 clones in 6 different time points from 3 patients. In contrast, there were no clones found that contained N155H/S together with either S147G or G140C/S.
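Whether two DRMs co-occur more often than chance can be checked with a 2x2 Fisher's exact test, one of the tests used in this study. A sketch with invented counts, loosely modeled on the observation that S147G never appeared without a Q148 mutation:

```python
# Invented 2x2 counts, loosely patterned on S147G never occurring
# without a Q148 mutation in the clone set.
from scipy.stats import fisher_exact

#            Q148X present   Q148X absent
table = [[112,             0],    # clones with S147G
         [40,            300]]    # clones without S147G
odds, p = fisher_exact(table)
print(f"p = {p:.2e}")  # tiny p -> strongly non-random co-occurrence
```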
Intra-IN Coding Region Linkage
In the more limited analysis of 5 RAL-treated patients, the most frequent single INI DRM-containing strains were N155H, also commonly seen in EVG-treated patients, and Y143C/R, which was not found in the EVG-treated patients. The most common strains containing more than one INI DRM were G140S + Q148H, with or without the additional mutations E138A or Y143C, observed in 3 patients.
IN Mutation Linkage with PR and RT Mutations
The existence of positive or negative association of IN mutations with PR and/or RT mutations was examined in sequences from both EVG- and RAL-treated patients. Table 1 shows the 8 most common INI DRM patterns and their associated PI and RTI DRM. Single or combinations of INI DRM were found on strains carrying 0-6 PI mutations, 1-10 NRTI mutations, and/or 0-3 NNRTI mutations. This observation is consistent with results shown in Figure 2, where multiple different IN mutations evolved on the highly PR/RT-mutated strains found at baseline in the EVG-treated patients. These data suggest that development of IN resistance mutations is not restricted by PR and/or RT resistance genotypes in highly ARV-experienced patients carrying multiple PI and RTI DRM.
Analysis of mutational linkage data from PCR-amplified samples can be complicated by PCR-mediated recombination events [30][31][32]. We employed modifications to standard PCR procedures that have been shown to reduce the frequency of recombination [39,44]. In addition, we performed recombination analysis experiments to assess the number of these events in our PCR system. Within the IN coding region, we found less than 1 recombination event among the clones tested for each patient time point. While the recombination frequency within the IN coding region is very low, it is possible that some of the rare IN DRM genotypes found in our study (Figure 3) were the result of PCR-mediated recombination. The low recombination frequency was similar within the PR/RT coding region analyzed. The frequency was higher when analyzing recombination between the PR/RT and IN coding regions; however, this did not affect our finding of linkage between PR/RT and IN DRM, as 92% of the PR/RT clones had the same PR/RT DRM across all time points and all patients, due to the extensive ARV experience of the patients. Thus in this study, the few recombination events between the PR/RT and IN regions would not change the PR/RT DRM genotypes associated with the various INI DRM genotypes. However, in studies where patients have less ARV experience and/or PR/RT DRM, recombination may affect linkage results, which may benefit from other analysis procedures like single genome sequencing.
The early appearance of multiple INI DRM-containing quasispecies in EVG-treated patients is similar to that seen in patients initially receiving NNRTI treatment [33,34]. In contrast, the early development of PI and NRTI resistance is typically characterized by the emergence of a single DRM-containing quasispecies that is followed by the sequential addition of other DRM to the original genotype [33,[35][36][37][38]. The similarities between INI and NNRTI DRM development, in contrast to that of PI and NRTI, may be related to the fact that INI and NNRTI are more potent drugs, exerting a significantly greater selective pressure on the viral population and evoking a wider range of DRM-containing strains. In addition, low resistance barriers to INI DRM can allow multiple distinct pathways to develop that may have differences in the level of resistance conferred and are, in part, mutually exclusive. Co-existent DRM quasispecies present during early failure subsequently resolve to predominant resistant variants (containing single or multiple INI DRM) that exhibit the best ability to replicate in the presence of ongoing EVG drug pressure.
Previously published data for RT describe the step-wise loss of susceptibility occurring through the evolution of intermediates (i.e., M184I/V) or sequential development of resistance (leading to Q151M), and that multiple RT mutations are likely to reside or co-locate on single quasispecies [39,40]. In addition, studies have described the co-linkage of multiple RT and/or PR DRMs on the same viral strand [41], and the evolution of intermediates, with resultant single or limited species containing multiple ARV class DRMs on the same strand, under continued exposure to a failing ARV regimen [42]. Our patients were all heavily ARV treated, with multi-class resistance, as evidenced by the numerous background RTI and PI DRMs, which remained remarkably consistent over time, implying that the additional fitness pressure of the INI DRM could be accommodated on the MDR backbone. Specifically, although certain RTI and INI DRM affect replication capacity or viral fitness [12,29], that did not preclude, for example, M184V in the RT coding region and N155H or Q148H/K/R in the IN coding region from co-locating on the same strand and remaining detectable over time, implying no lethal effect of this DRM co-linkage. RT mutations can affect replication capacity, but in limited studies they have not affected INI susceptibility [43].
Population or current ultradeep sequencing of samples from patients who failed RAL or EVG has not resolved whether prior NRTI/NNRTI/PI DRM-containing quasispecies acquire IN mutations associated with RAL or EVG resistance, or circulate separately as distinct species, because of the short PCR amplicons analyzed. We employed a population-based RT-PCR method under conditions that reduce PCR-mediated recombination to assess quasispecies variability and linkage. While our method does not eliminate the potential for PCR-mediated recombination, the results from our study suggest that this effect is limited. Over long distances (e.g., between PR and IN) recombination is difficult to assess, since most samples were homogeneous with respect to PI and RTI DRM. Within the IN coding region, we did not find combinations of INI DRM known to be exclusive (or antagonistic) in RAL-resistant strains in this study or others [11,23,29,44], which would be expected if recombination occurred with high frequency. Although others have reported on the use of long-range PCR for evaluation of multi-coding-region relationships in HIV [45,46], to our knowledge, ours is the first report of co-localization of these DRMs across 3 distinct pol gene coding regions. For the most part, INI DRMs were added to already complex multi-drug resistant (MDR) species, as the patients studied received EVG or RAL only after failing other ARV regimens and typically had dominant viral strains with PI, NRTI, and NNRTI DRMs. Our study has several limitations. First, the study was limited to a small number of patients with significant prior ARV exposure, multiple pre-existing DRMs, and suboptimal response to the EVG-containing regimen. It is not known whether INI DRM evolution or linkage relationships with RT and PR are the same in ARV-naïve patients or patients failing their first or second ARV regimen, or in patients with an initial strong virologic response (e.g., to undetectable) and subsequent failure. We only determined DRM evolution with EVG and not RAL, so the presence and evolution of RAL DRM intermediates, as others have described, could differ from what our clonal analysis would show. In addition, we selected patients with a likelihood, based on preliminary population-based sequencing results, of possessing multiple DRM-containing strains. It is possible that a greater percentage of patients in the overall EVG-treated population, compared to what was surveyed here, develop INI resistance and failure with a limited number of DRM-containing quasispecies (as was seen in some patients in this study). Secondly, we did not generate in vitro drug susceptibility or replication capacity data on the strains in this study, so it was not possible to discern whether changes in mutational profiles are driven by drug selective pressure or simply represent random and/or stochastic fluctuations of variants that are equally capable of replicating in the presence of EVG. We only analyzed patients with HIV-1 clade B virus, and so could not determine whether evolution or linkage is the same in non-clade B strains.
Previous studies have indicated that RAL is equally efficacious in non-clade B virus infections and that DRMs that develop after RAL failure are similar to clade B strains [47], although novel DRMs that confer RAL resistance have also been described in circulating recombinant forms (G118R) and non-clade B strains [48,49].
Finally, the number of clones analyzed may be insufficient to determine the prevalence of additional very low-frequency mutants, as has been described with NGS, even in patients without exposure to INIs. Previous studies have found that low-frequency NNRTI mutants can result in higher rates of virologic failure [50]. Although low-frequency mutants with RAL DRMs have been found prior to RAL therapy, they have not affected virologic outcome in most of the patients studied [18]. We did not find major INI DRMs prior to EVG treatment, although it is possible that EVG mutants existed and led to early virologic failure. Further analysis using NGS would be needed to answer this question. In addition, other quasispecies-probing techniques, such as single-genome sequencing, may yield different proportions of quasispecies at the various time points; our goal was not to accurately quantify the quasispecies populations, but to survey their possible breadth and evolution over time.
In summary, failure of EVG-containing ARV regimens demonstrates a dynamic evolution of multiple species during early failure, leading to a final DRM-associated species. EVG and RAL DRMs were co-located on the same viral strand as RT and PR DRMs. Co-linear genotypic analysis of long-range amplification products supports the utility of whole HIV-1 pol sequencing to provide a more comprehensive resistance profile for guiding ARV treatment and prognosis.
Ethics Statement
The Yale University/VA Connecticut Institutional Review Board approved the study and written informed consent was obtained from all subjects studied at Yale. The Stanford University IRB approved the study with a waiver of consent, as some samples for this study were obtained at Stanford after routine clinical laboratory testing with safeguards in place for protection of personal health information. The Western IRB (Olympia, WA) and Chesapeake IRB (Columbia, MD) approved Gilead Study 183-0105, and written informed consent was obtained from all study subjects.
Patients
Ten EVG-treated patients that were enrolled in Gilead 183-0105 [6], a dose-ranging phase 2-study that explored the use of ritonavir-boosted EVG in the absence of PIs with optimized RTI background in heavily treatment-experienced patients, were studied. A convenience sample of patients was selected from this study based on the following criteria: 1) received 125 mg/day of EVG; 2) had cryopreserved plasma from multiple time points; 3) had virologic failure on their EVG-containing regimen, and 4) had evidence of evolving EVG resistance (mixtures) based on preliminary population-based genotypes. In addition, plasma samples from 5 raltegravir-experienced patients were obtained from remnant material from clinical practice; only single, failure time points were available from these patients.
Amplification and Clonal Analysis
RNA was isolated from 500 µl of plasma with Qiagen Viral MinElute Kits (Qiagen, Chatsworth, CA) and amplified by RT-nested PCR using conditions previously shown to reduce the frequency of PCR-mediated recombination [46,51]. These conditions include using a mixture of rTth polymerase and the proofreading polymerase VentR, hot start, and long extension times in both the RT and PCR steps. Reverse transcription was performed using Superscript III First Strand Kits with random hexamers according to the manufacturer's instructions. The RT conditions were 25 °C for 10 min, 45 °C for 2 hr, and 85 °C for 5 min. The resulting cDNA was amplified for 40 cycles with the GeneAmp XL PCR kit (Applied Biosystems, Foster City, CA), using primers MAW26 (TGG GAA ATG TGG AAA GGA AGG AC) and VIFR5 (GGG ATG TGT ACT TCT GAA CTT), which generate an amplicon from the protease coding region through the integrase coding region. Amplification parameters were 1 cycle of 94 °C for 1 min, then 35 cycles of 94 °C for 15 sec, 53 °C for 15 sec, and 72 °C for 10 min, and a final 10 min extension at 72 °C. A second-round PCR was performed under the same amplification conditions using primers PRO-1 (CAG AGC CAA CAG CCC CAC CA) and MAW24 (TGC TGT CCC TGT AAT AAA CCC GAA AAT). Limiting dilution analysis on a subset of samples used in this study showed that as few as 300 input HIV RNA copies could be successfully amplified using this method. The resulting 2.7 kb amplicons were cloned using TOPO TA cloning kits (Invitrogen, Carlsbad, CA) according to the manufacturer's instructions. Minipreps from the resulting clones were prepared using Qiagen Turbo96 Miniprep kits, and the sequences of codons 1-99 of PR, 1-230 of RT, and 1-219 of the IN coding regions were determined by standard dideoxyterminator sequencing (Sequetech, Mountain View, CA) using primers M13F and M13R (for PR and IN) and primer RT20 (CTG CCA GTT CTA GCT CTG CTT C, for RT).
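For orientation, the sketch below shows how one 2.7 kb amplicon could be sliced into the three coding regions sequenced here (PR codons 1-99, the first 230 codons of the 560-codon RT gene, IN codons 1-219). The nucleotide offsets are placeholders; in practice they would be located by alignment to a reference such as HXB2:

```python
# Offsets are placeholders; in practice they would come from aligning the
# amplicon to a reference (e.g., HXB2). Codon counts follow the text.
REGIONS = {            # name: (start_nt_in_amplicon, n_codons) - assumed starts
    "PR": (30, 99),    # protease, codons 1-99
    "RT": (327, 230),  # first 230 codons of the 560-codon RT gene
    "IN": (2007, 219), # integrase, codons 1-219
}

def split_regions(amplicon):
    """Slice a pol amplicon into per-region codon lists."""
    out = {}
    for name, (start, n_codons) in REGIONS.items():
        nt = amplicon[start:start + 3 * n_codons]
        out[name] = [nt[i:i + 3] for i in range(0, len(nt), 3)]
    return out

codons = split_regions("A" * 2700)  # stand-in for a real 2.7 kb sequence
print({k: len(v) for k, v in codons.items()})  # {'PR': 99, 'RT': 230, 'IN': 219}
```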
Standard phylogenetic analysis was performed to rule out contamination between patients. Consensus sequences were generated from alignments of the clone sequences using MegAlign (DNAstar, Madison, WI), and mixtures were reported if a minority population was represented in greater than 20% of the clones. Drug resistance mutations for PI, NRTI, NNRTI, and INI in each clone were identified with the Stanford Drug Resistance database [52]. Statistical analyses using linear regression, Chi-square, and Fisher's exact tests were performed with VassarStats (http://faculty.vassar.edu/lowry/VassarStats.html).
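The consensus-and-mixture rule described above (report a mixture when a minority population exceeds 20% of clones) can be sketched as follows; the clone set is a toy example:

```python
# Toy clone set; the 20% minority threshold matches the rule in the text.
from collections import Counter

def consensus_with_mixtures(aligned, threshold=0.20):
    """Per-position consensus plus positions where a minority exceeds threshold."""
    cons, mixtures = [], []
    n = len(aligned)
    for pos in range(len(aligned[0])):
        counts = Counter(seq[pos] for seq in aligned).most_common()
        cons.append(counts[0][0])
        minor = [res for res, c in counts[1:] if c / n > threshold]
        if minor:
            mixtures.append((pos + 1, counts[0][0], minor))
    return "".join(cons), mixtures

clones = ["NHQGV", "NHQGV", "HHQGV", "HHQGV", "NHQGV"]
cons, mix = consensus_with_mixtures(clones)
print(cons, mix)  # NHQGV [(1, 'N', ['H'])] -> position 1 reported as a mixture
```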
Recombination Analysis
The frequency of PCR-mediated recombination in this study was determined by mixing equal proportions (1000 copies each) of unique patient-derived plasmid clones. For example, clone 11 from Patient 3 was mixed with clone 1 from Patient 8. Eight separate plasmid mixtures were prepared and amplified as described above, except that the first-round primers were M13F and M13R. The resulting second-round amplicons (using primers PRO-1 and MAW24) were cloned and sequenced as described above. Sequences from approximately 30 clones per mixture were assembled, aligned, and tested for the presence of recombinant clones using the PHI test in the SplitsTree software package (www.splitstree.org) [53]. Alignments were further evaluated for recombination using SimPlot (http://sray.med.som.jhmi.edu/SCRoftware/simplot/) [54].
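A crude version of the underlying idea, assigning each informative clone position to one parental sequence and counting parent switches, is sketched below. This is only a conceptual stand-in for the PHI test and SimPlot analyses actually used:

```python
# Conceptual stand-in for the PHI/SimPlot analyses: assign each informative
# site to a parent and count parent switches along the clone.
def parent_switches(clone, parent_a, parent_b):
    path = []
    for c, a, b in zip(clone, parent_a, parent_b):
        if a == b:
            continue  # site does not distinguish the parents
        path.append("A" if c == a else "B")
    return sum(1 for x, y in zip(path, path[1:]) if x != y)

pa = "AAAAAAAAAA"
pb = "CCCCCCCCCC"
recombinant = "AAAAACCCCC"  # one crossover from parent A to parent B
print(parent_switches(recombinant, pa, pb))  # 1 -> flagged as a recombinant
```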
Nucleotide Sequences
Sequences of all clones were submitted to GenBank under accession numbers JX198692-JX202525. | 2016-05-04T20:20:58.661Z | 2012-07-18T00:00:00.000 | {
"year": 2012,
"sha1": "b95aa453dbd45c117ec4f36680aac4a2897207ec",
"oa_license": "CC0",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0040514&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b95aa453dbd45c117ec4f36680aac4a2897207ec",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
238239435 | pes2o/s2orc | v3-fos-license | Atomic-Scale Tuning of the Charge Distribution by Strain Engineering in Oxide Heterostructures
Strain engineering of complex oxide heterostructures has provided routes to explore the influence of the local perturbations to the physical properties of the material. Due to the challenge of disentangling intrinsic and extrinsic effects at oxide interfaces, the combined effects of epitaxial strain and charge transfer mechanisms have been rarely studied. Here, we reveal the local charge distribution in manganite slabs by means of high-resolution electron microscopy and spectroscopy via investigating how the strain locally alters the electronic and magnetic properties of La0.5Sr0.5MnO3–La2CuO4 heterostructures. The charge rearrangement results in two different magnetic phases: an interfacial ferromagnetically reduced layer and an enhanced ferromagnetic metallic region away from the interfaces. Further, the magnitude of the charge redistribution can be controlled via epitaxial strain, which further influences the macroscopic physical properties in a way opposed to strain effects reported on single-phase films. Our work highlights the important role played by epitaxial strain in determining the spatial distribution of microscopic charge and spin interactions in manganites and provides a different perspective for engineering interface properties.
In complex oxide heterostructures, a controlled modification of the charge-carrier density at the interface can yield a wide variety of phenomena that are absent in bulk materials. 1−3 Many studies in this field have focused on the coupling between manganites and cuprates. 4−7 It has been predicted that charge transfer from a manganite to a cuprate occurs because of the difference between their chemical potentials. 8 X-ray spectroscopy studies of La 2/3 Ca 1/3 MnO 3 /YBa 2 Cu 3 O 7 interfaces have indeed demonstrated a charge transfer of ∼0.2 e − per Cu ion from Mn to Cu, causing a change in orbital occupation and an induced net magnetic moment in the cuprate. 9 In addition, the spatial evolution of the electronic ground state at the interface has also been observed. 10,11 The length scale of the charge transfer, measured by scanning tunneling microscopy, was suggested to be in the subnanometer range, 12 and the spatial broadening of the electronic transition is correlated with the rougher interface. Meanwhile, electron energy-loss spectroscopy measurements revealed an electron enrichment in a few-nanometer-thick region of the manganite layer near the interface as a result of orbital hybridization and Cu/Mn substitution. 13,14 These observations suggest that disorder effects are an important factor in attempts to understand the spatial correlations in such systems and to obtain precise control of the electronic structure at the interface.
Strain can provide an additional handle to manipulate the interfacial coupling between two materials. An anisotropic hopping between orbitals can be induced by structural changes and cause an orbital ordering. 15 −18 In single-layer manganite thin films, the elongation or compression of MnO 6 octahedra can split the degenerate e g levels, lowering either the 3z 2 −r 2 or the x 2 −y 2 state based on the Jahn−Teller effect. 19 Experimentally, the magnetic ground state of La 0.5 Sr 0.5 MnO 3 (LSMO) is observed to change from an insulating and antiferromagnetic (AF) C-type, to a metallic and ferromagnetic (FM), and finally to an in-plane conducting and AF A-type phase by changing the tetragonality, c/a ratio, from 1.04 (compressive strain) to 0.98 (tensile strain). 20−23 Thus, by varying the strain condition the preferential orbital occupation changes, one can directly modify the electronic and magnetic properties of the material. However, the role of the epitaxial strain for the charge transfer at the interface as well as the interfacial magnetic coupling in cuprate/manganite heterostructures is not yet well understood and explored. A comprehensive picture of the interplay between the lattice degrees of freedom and the electronic structure still calls for a detailed investigation with atomic accuracy.
Here, we provide a systematic nanoscopic investigation of strain and interface effects in a La 0.5 Sr 0.5 MnO 3 (LSMO) layer inserted between insulating antiferromagnetic La 2 CuO 4 (LCO) layers grown on three different substrates (LCO/LSMO/LCO-substrate system) with different lattice spacings. Using scanning transmission electron microscopy (STEM) combined with electron energy-loss spectroscopy (EELS), the detailed chemical composition and the changes of the local Mn valence in the system can be probed at the atomic scale near the interfaces. An asymmetric charge distribution near interfaces within the manganite layers is observed: hole accumulation near interfaces suppresses the magnetization, giving rise to an exchange-bias effect. Away from the interfaces, the ferromagnetic order is recovered by an electron enrichment. In contrast to strain effects reported on single-phase films, we find that the charge redistribution in manganite layers is correlated with the interfacial Cu/Mn intermixing as well as the substrate-induced strain, which in turn alters the charge transfer at the interface and the physical properties of the LCO/LSMO/LCO-substrate system.
To confirm the structural quality of the films, we first investigate the LCO/LSMO/LCO trilayer on the LSAT (001) substrate as a representative sample. The low-magnification STEM high-angle annular dark-field (HAADF) image (Figure 1a) demonstrates a good macroscopic crystal quality with structurally coherent LCO/LSMO and LSMO/LCO interfaces. Similar to prior work, 28 bonding at the bottom interface is followed by an indirect contact at the top interface. In order to explore the elemental distribution, we acquire atomic-resolution 2D elemental maps across the two interfaces. Figure 1d−g displays La (green), Sr (orange), Mn (blue), and Cu (red) maps, respectively. The superimposed overlay (Figure 1h) and the normalized intensity profiles (Figure 1i) of each element show that the bottom LCO/LSMO interface has a stronger intermixing that spreads over ∼1.5 nm, while the top LSMO−LCO interface is abrupt. Away from the interfaces, the La and Sr concentration remains the same. The trilayers grown on STO and LSAO show similar results (Figures S5, S6), suggesting that the asymmetric cation intermixing at these interfaces is largely independent of the magnitude of the substrate-induced strain and possibly correlates with different stacking sequences and growth kinetics instead. 24,30 Electronic and Magnetic Properties of Trilayers. Next, we turn our attention to transport and magnetic measurements of all three LCO/LSMO/LCO trilayers. The resistance vs temperature (R−T) curves in Figure 2a are normalized to the resistance at 290 K to reveal the differences at low temperatures. The trilayer grown on STO is more semiconductor-like, with diverging resistance as T → 0, which agrees well with the expected semiconducting state in half-doped LSMO. 25,26 However, films grown on LSAT and LSAO show metallic behavior. Meanwhile, noticeable changes in the magnetic interactions for the three samples are also observed (Figure 2b,c). The Curie temperature, T C , as well as the saturation magnetization increases, and accordingly, the resistivity decreases, consistent with the well-known behavior of manganites. 31 The measured T C for the three films are determined to be ∼163, 230, and 247 K on STO, LSAT, and LSAO, respectively. Note that the Néel temperature for the antiferromagnetic LCO cannot be determined due to its weak magnetic signal. The measured magnetism of the films is, therefore, dominated by the LSMO layer. Moreover, the trilayer structure is identical on all three substrates, so the enhanced magnetism should arise from the enhanced double-exchange contribution to the magnetic interactions. Prior studies on epitaxial LSMO thin films show that compressive epitaxial strain tends to reduce T C and suppress the magnetization in LSMO. 20,21 Thus, the compressive strain of the LCO/LSMO/LCO trilayer on LSAO is expected to weaken the magnetism. However, we observe that the film on LSAO shows the largest magnetic moment, while for STO the magnetization of the film is reduced with a lowered T C . This unexpected behavior suggests that the magnetic and electronic properties of the trilayers cannot be simply ascribed to the induced epitaxial strain. In addition, all samples exhibit nonzero values of the exchange bias at 5 K, consistent with previously reported values. 28 The representative hysteresis loops of the film on LSAT clearly demonstrate the characteristic exchange-bias shift along the magnetic-field axis in Figure 2d.
These results suggest that the magnetic frustration near the interfaces originates from an exchange coupling of the ferromagnetic layer to the antiferromagnetic interface layer. 32−35 Detailed information about the measured exchange bias and zero-field-cooled magnetization curves for all samples can be found in Figures S8 and S9.
Probing Charge Variation Across the Interfaces. The exchange interaction between Mn 3+ and Mn 4+ ions in manganites is at the root of the correlation between conductivity and ferromagnetism. 36 Herewith, we focus on changes in the local Mn valence by probing the Mn L 2,3 edge fine structures, which reflect the unoccupied local Mn 3d density of states. 37 The evolution of the Mn L 2,3 edge spectra on each atomic layer within LSMO of the trilayer on LSAT is shown (Figure 3). A large width perpendicular to the scanning direction was averaged along the linescan to avoid any beam damage to the film and to increase the signal-to-noise ratio of the linescan, which in turn ensures the accuracy of the valence determination. Spectra on layers 1−6 in Figure 3a were obtained from the bottom interface to the central LSMO layers, while layers 7−12 were scanned starting from the central layers to the top interface through the same scan. The Mn L 2,3 spectra (Figure 3b) show a clear progressive increase of the L 2 intensity from the central layers to both interfaces, an indication of valence changes within LSMO. To quantify this effect, the atomic-layer-resolved L 3 /L 2 intensity ratios and corresponding valence states 37 were determined for layers 1−12 and are presented in Figure 3c. The Mn valence profile exhibits an asymmetric shape near the two interfaces. The bottom interface displays a wide region of increased Mn valence, close to 3.6+, over a three-monolayer-broad region, while the top interface displays a narrower region of approximately one monolayer. The spatial extent of these regions agrees well with the trend observed in the B-site intermixing at both interfaces (Figure 1i). More importantly, valence changes not only occur near the interfaces but also extend to the central Mn layers: away from the interfaces (layers 5−10), a valence state significantly lower than the expected value of 3.5+ is observed. This suggests that the underlying dopant-concentration profiles within LSMO do not play a dominant role in the changes in the Mn valence. Instead, a charge redistribution occurs in our system.
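The ratio-to-valence mapping can be sketched as a linear interpolation between reference L3/L2 ratios for Mn3+ and Mn4+. The endpoint ratios below are placeholders; the study used the calibration of ref 37:

```python
# Endpoint ratios are placeholders; the study used the calibration of ref 37.
REF = {3.0: 2.5, 4.0: 2.0}  # valence -> L3/L2 ratio (assumed; the ratio falls
                            # as the Mn oxidation state rises)

def mn_valence(l3_l2):
    (v_lo, r_lo), (v_hi, r_hi) = sorted(REF.items())
    frac = (l3_l2 - r_lo) / (r_hi - r_lo)  # 0 at Mn3+, 1 at Mn4+
    return v_lo + frac * (v_hi - v_lo)

for layer, ratio in enumerate([2.20, 2.23, 2.30, 2.45], start=1):
    print(f"layer {layer}: L3/L2 = {ratio:.2f} -> Mn ~{mn_valence(ratio):.2f}+")
```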
Strain-Tuned Local Charge Redistribution. The overall trend of the observed asymmetric hole profile within LSMO layers (cf. Figure 3c) is depicted in Figure 4a. To explore the origin of unexpected physical properties that we observe, comprehensive analyses of Mn valence distributions are extended to all trilayers grown on the three substrates in Figure 4b. We estimate the local electronic and magnetic phase present in LSMO by comparing the measured Mn valence with the Mn doping relative to its bulk-like state. We find that for all samples a significantly increased Mn valence near the bottom interface (first to fourth Mn layer) leads to a formal local doping close to the x = 0.6 antiferromagnetic state. This is consistent with previous theoretical model calculations and experimental polarized neutron reflectometry studies showing the lack of carriers leading to magnetic and electronic phase separation 31,38,39 and a reduced FM due to Mn 4+ −Mn 4+ superexchange antiferromagnetic interaction at the cuprate/ manganite interface. 14,28 Away from the interfaces, the magnitude of electron enrichment due to the presence of a lowered Mn valence in LSMO differs significantly for the three substrates. This suggests that the magnetization and conductivity within LSMO are mainly dominated by Mn−Mn double-exchange interactions in the central Mn layers (fifth to tenth Mn layers). The magnetization as well as the conductivity increase as the electron enrichment increases within the central Mn layers. Under compressive strain on LSAO, central Mn layers are close to x = 0.3 for a bulk-like FM phase, which corresponds to the highest ferromagnetic moment and lowest resistivity in the phase diagram. On the other hand, in the case of the tensile strain for the STO substrate, the weakened charge delocalization leads to a reduction of the total magnetization, Curie temperature, and metallicity, compared to the other two films.
Ca-Doping: LCO/LCMO/LCO Trilayer. To confirm the tunability of the charge delocalization and the magnetic phase in trilayers, we also investigated a structure consisting of 10-u.c.-thick La0.5Ca0.5MnO3 (LCMO) sandwiched by LCO grown on STO, since the size of the A-site ions in the manganites also influences the stability of the structural phase and may induce chemical pressure. LCMO (a0 = 3.83 Å) is tensile-strained on STO with a lattice mismatch of δ = −1.96%. Here, the interfacial structure follows a sequence similar to that of the LCO/LSMO/LCO trilayer, the only difference being that a smaller deficiency of the dopant concentration is observed at the bottom interface (Figures S10, S11). If the A-site intermixing were responsible for the charge redistribution, we would expect to observe some differences in the interfacial Mn valence between the two trilayers. Nevertheless, an increased Mn valence close to 3.6+ near the bottom interface occurs in both films (Figure 5b), verifying that changes in the Mn valence are related to the B-site rather than the A-site sublattice ions. Moreover, we found weaker ferromagnetism in the LCO/LCMO/LCO trilayer (Figure 5a) compared with LCO/LSMO/LCO on STO. The stronger chemical pressure induced by the smaller ionic radius of Ca allows the lattice to extend the tetragonal phase toward a lower c/a ratio range (Table S1), which decreases the extent of the charge redistribution. As a consequence, a further weakening of magnetism in the manganite layers is observed here.
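For reference, the quoted mismatch follows directly from the bulk lattice constants; the short check below assumes the convention of referencing the mismatch to the film lattice constant and the standard STO value a = 3.905 Å, which together reproduce the −1.96% stated above.

a_film = 3.83   # LCMO pseudocubic lattice constant, Angstrom
a_sub = 3.905   # SrTiO3 (STO) lattice constant, Angstrom

# Negative mismatch here means the film is stretched (tensile strain).
delta = (a_film - a_sub) / a_film * 100.0
print(f"delta = {delta:.2f}%")  # -> delta = -1.96%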
Effect of Strain on Charge Distribution and Magnetism. To elucidate the role of the structure in the charge redistribution and magnetism within the individual LSMO layers, we compare the variation of the c/a ratio as a function of the Mn valence and Curie temperature. First, the Curie temperature increases as the averaged c/a ratio of the LSMO layer increases (Figure 6a), in sharp contrast to the previously reported suppression of magnetism in single-layer LSMO films by substrate-induced strain.22,23 This suggests that the mechanism must involve aspects of the interface beyond the Jahn−Teller effect. Second, we find that the average Mn valence decreases with increasing averaged c/a ratio (Figure 6b), indicating a correlation between the amount of charge transferred from the manganite and the lattice strain. A change in the magnitude of the charge redistribution is also observed from the standard deviations of the means. The trilayer on STO shows a Mn valence of 3.55+ averaged over the whole manganite layer, higher than the nominally expected 3.5+ Mn valence. This is consistent with the scenario of charge transfer from the manganite to the cuprate8,9 and suggests an intrinsic mechanism due to interfacial electronic reconstruction. In contrast, under compressive strain, the significantly larger spatial variation of the Mn valence in the LSMO layer, with an average Mn valence of ∼3.5+, suggests less charge transferred from the manganite and a driving force involving more extrinsic effects, e.g., chemical intermixing at the interface.
CONCLUSIONS
Combining strain and interface effects allows us to establish the link between the structural and electronic reconfiguration at cuprate−manganite interfaces. Near the interface, the observed hole-accumulation-induced AF exchange coupling can be dominated by a combination of charge transfer (due to band mismatch) and Cu/Mn intermixing, both of which occur on a length scale similar to that of the increased Mn valence. The charge redistribution in the LSMO layer can be attributed to electrostatic interaction: it is possible that substitution of Cu2+ on the Mn3.5+ site at the interface attracts holes (Mn4+) toward the negatively charged interface, so that a lowered Mn valence is observed in the central LSMO layers. Another possible scenario that could produce the observed redistribution is that, owing to the size mismatch between Cu2+, Mn3+ (∼0.7 Å), and Mn4+ (∼0.5 Å),40 the diffusion of the larger Cu2+ at the interface causes Mn3+ to move into the central layers to relax the elastic strain energy.41 Meanwhile, the lattice strain plays an important role in setting the magnitude of the charge redistribution within the manganite layers. The compressive tetragonal distortion lowers the 3z²−r² orbitals, leading to a stronger delocalization of electrons in the out-of-plane direction.19 Therefore, the effect of strain together with the Cu/Mn substitution may result in a larger variation of the Mn valence for the trilayer on LSAO and an electron enrichment away from the interface, which is presumably responsible for its enhanced FM and metallic behavior. On the other hand, tensile strain favors occupation of the x²−y² orbitals. This confines electrons in the in-plane direction and reduces the charge redistribution in the LSMO layer.
In summary, we visualize the strain-tuned charge redistribution by mapping local Mn valence variations in the manganite layers. These results emphasize the importance of the interface effect, which here leads to a prominent charge redistribution away from the interface and drastically alters the magnetic and electronic structure. Further, the lattice strain together with the Cu/Mn substitution can modify the charge delocalization at the interface. This finding may provide opportunities to tune the charge transfer at cuprate/manganite interfaces. More broadly, our approach of engineering the spatial extent of the charge redistribution can be applied to achieve more precise property control at the atomic scale for oxide electronics and related devices.
METHODS
Thin Film Fabrication. LCO/LSMO/LCO trilayers were grown using an ozone-assisted atomic-layer-by-layer oxide molecular beam epitaxy (MBE) system. The deposition conditions used for synthesizing the samples were a temperature of ∼620 °C (pyrometer reading) and a pressure of ∼1 × 10−5 Torr (of mixed ozone and molecular and atomic oxygen). Each individual growth step was monitored using in situ reflection high-energy electron diffraction (RHEED). Representative RHEED patterns taken from the individual LCO and LSMO layers of the trilayer sample grown on the LSAT substrate are presented in Figure S1 as an example. The structural quality of the films was confirmed ex situ by high-resolution X-ray diffraction (see Figure S2).
Electron Microscopy and Spectroscopy. TEM sample preparation included mechanical grinding (down to ∼10 μm), tripod wedge polishing (with an angle of ∼1.5°), and double-sided argon-ion milling. For argon-ion thinning, a precision ion polishing system II (PIPS II, Model 695) was used at low temperature. Immediately before the experiment, samples were treated in a Fischione plasma cleaner in a 75% argon−25% oxygen mixture. For STEM analysis, a probe-aberration-corrected JEOL JEM-ARM200F STEM equipped with a cold field-emission electron source, a probe Cs-corrector (DCOR, CEOS GmbH), a Gatan GIF Quantum ERS spectrometer, and a Gatan K2 direct electron detector was used at 200 kV. STEM imaging and EELS analyses were performed at probe semiconvergence angles of 20 and 28 mrad, resulting in probe sizes of 0.8 and 1.0 Å, respectively. The collection angle range for HAADF imaging was 110−270 mrad. A collection semiangle of 111 mrad was used for EELS investigations. A dispersion of 0.5 eV/channel with an effective energy resolution of ∼1 eV was used for overall chemical profiling of the films, and a dispersion of 0.1 eV/channel with an effective energy resolution of ∼0.5 eV was chosen for the Mn L2,3 white lines to quantify the Mn L3/L2 intensity ratio. Further details of the data processing and the corresponding Figure S12 are given in the Supporting Information.
Electronic and Magnetic Properties. We used SQUID magnetometry to measure the magnetic properties. The magnetization curves were measured using a Magnetic Property Measurement System (MPMS, Quantum Design Co.) in the Vibrating Sample Magnetometer (VSM) mode. Electrical measurements were done in a Van der Pauw (four-point-probe) configuration using DC currents of alternating polarity (±20 μA). The values of resistivity at room temperature | 2021-10-02T06:17:17.891Z | 2021-09-30T00:00:00.000 | {
"year": 2021,
"sha1": "04e304891a00b178bdd0ec6aa74619bef79a039c",
"oa_license": "CCBY",
"oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acsnano.1c05220",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "36130301e408c9acc9fd6d51f6c4a027cb7d8fe5",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
259650000 | pes2o/s2orc | v3-fos-license | Dim artificial light at night alters immediate early gene expression throughout the avian brain
Artificial light at night (ALAN) is a pervasive pollutant that alters physiology and behavior. However, the underlying mechanisms triggering these alterations are unknown, as previous work shows that dim levels of ALAN may have a masking effect, bypassing the central clock. Light stimulates neuronal activity in numerous brain regions which could in turn activate downstream effectors regulating physiological response. In the present study, taking advantage of immediate early gene (IEG) expression as a proxy for neuronal activity, we determined the brain regions activated in response to ALAN. We exposed zebra finches to dim ALAN (1.5 lux) and analyzed 24 regions throughout the brain. We found that the overall expression of two different IEGs, cFos and ZENK, in birds exposed to ALAN were significantly different from birds inactive at night. Additionally, we found that ALAN-exposed birds had significantly different IEG expression from birds inactive at night and active during the day in several brain areas associated with vision, movement, learning and memory, pain processing, and hormone regulation. These results give insight into the mechanistic pathways responding to ALAN that underlie downstream, well-documented behavioral and physiological changes.
Introduction
A continued rise in global urbanization also increases artificial light at night (ALAN), with light pollution now recognized as a disruptive pollutant (Dominoni and Nelson, 2018). ALAN, even at dim levels, disrupts physiological and behavioral processes (Ouyang et al., 2018). However, these changes appear uncoupled from canonical circadian genes, which synchronize behavior and physiology to the natural photoperiod (Spoelstra et al., 2018; Alaasam et al., 2021; but see Dominoni et al., 2020). Therefore, how dim ALAN affects neuronal activity to disrupt downstream physiological and behavioral processes remains unknown. This knowledge gap hinders our ability to predict and ameliorate responses to light pollution.
As ALAN disrupts hormone regulation, immune function, and nighttime activity, its effects could be linked to many corresponding brain regions (Alaasam et al., 2018; Mishra et al., 2019), especially if central circadian pacemakers are not disrupted. For example, ALAN disrupts melatonin and diurnal corticosterone production (Mishra et al., 2019), which are produced by the pineal and adrenal glands and directly regulate the hypothalamus, septum, and hippocampus (Chabot et al., 1998; El-Sherif et al., 2003; Kus et al., 2013; Zhang et al., 2017). ALAN also disrupts immune gene expression, neuronal survival, and plasticity in the hippocampus and caudal nidopallium (Mishra et al., 2019; Taufique et al., 2019; Namgyal et al., 2020). Lastly, ALAN recruits new neurons to the medial striatum, theorized to replace dying neurons (Moaraf et al., 2021).
Neuronal activity induces immediate early gene (IEG) expression for new protein synthesis (Greenberg et al., 1986; Flavell and Greenberg, 2008). Therefore, IEGs indicate neuronal activation by associating firing with gene expression and have successfully been used to map neuronal pathways (Guzowski et al., 2005; Feenders et al., 2008). IEGs such as cFos and ZENK respond to different stimuli: cFos expression is stimulated by cAMP and calcium, and ZENK expression by injury, stress, etc. (Morgan and Curran, 1988; Sagar et al., 1988; O'Donovan et al., 1999). Using both IEGs can generate a holistic, detailed map of brain activity for a more representative analysis (Feenders et al., 2008; Nordmann et al., 2020).
We analyzed ALAN's impact on IEG expression throughout the whole brain of zebra finches (Taeniopygia guttata), an excellent diurnal model organism, as they translate external light similarly to most vertebrates (Durstewitz et al., 1999; Nakane and Yoshimura, 2014). Since ALAN initiates nighttime activity, we predicted activation in the visual and motor pathways, but that these areas would be similar to those of birds awake during the day. We also predicted, based on previous research, activation in areas involved in learning and memory, particularly the hippocampus, caudal nidopallium, and striatum (Taufique et al., 2019; Moaraf et al., 2021). We found that ALAN significantly altered IEG expression of cFos and ZENK in the hyperpallium, mesopallium, nidopallium, para-hippocampalis, striatum, entopallium, arcopallium, hippocampus, and septum compared to day and/or night birds.
Experimental design
Thirteen male zebra finches (~100 days old) were kept in outdoor aviaries at the University of Nevada, Reno with no previous exposure to ALAN. When they were ~140 days old, we moved them to individually housed indoor 47 cm × 31 cm × 36 cm cages and entrained them to 12 h light and 12 h dark (12L:12D) for 4 weeks. For daylight, we used 1.4-W, 5,000 K light-emitting diode (LED) lights rated at 95 lumens, with lights on at 0:00 (zeitgeber time (ZT) 0) and lights off at 12:00 (ZT 12). Birds were given food and water ad libitum. Each cage contained a mechanized perch that relayed hop activity to MATLAB every minute. Cages had individual light-occlusion shades and constant white noise in the background to limit visual and acoustic cues.
We video-recorded 30 min of behavior 90 min before perfusion and recorded activity via automated perches (Alaasam et al., 2018). An observer blind to the treatments scored time spent eating/drinking, grooming, hopping, or not moving from the video recordings. We conducted a power analysis based on previously collected behavior data from control and ALAN-exposed birds and determined that at least 3 birds were needed per treatment group (power = 0.8, α = 0.05, effect size = 2, number of groups = 3).
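The text does not state which tool performed this power analysis; as an illustrative reconstruction, the snippet below solves the same one-way, three-group design in Python with statsmodels, treating the reported effect size as Cohen's f.

from math import ceil
from statsmodels.stats.power import FTestAnovaPower

# Solve for the total sample size of a one-way, 3-group comparison with
# the parameters reported above (effect size 2, alpha 0.05, power 0.8).
n_total = FTestAnovaPower().solve_power(effect_size=2.0, alpha=0.05,
                                        power=0.8, k_groups=3)
print(ceil(n_total / 3))  # minimum birds per treatment group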
Birds were randomly assigned to one of 3 conditions: control night (12 h light:12 h dark; 12L:12D, sacrificed at dark night: ZT 14, n = 4), control day (12L:12D, sacrificed during the day: ZT 10, n = 4), and experimental ALAN (12 h light:12 h dim light; 12L:12Ldim, sacrificed at night with artificial light: ZT 14, n = 5). We chose the control day timepoint as close as possible to the night timepoints to be certain of capturing awake birds while avoiding larger differences in circadian activity. As determined by one-way ANOVA, groups did not differ in initial mass (p = 0.62). After the 4-week entrainment period, we sacrificed the control night group during the dark period (ZT 14) and the control day group during the light period (ZT 10). We sacrificed individuals in the ALAN group 2 h after the bird's first exposure to ALAN (ZT 14), to obtain peak protein expression and avoid overlap from the light period. ALAN was standardized to around 1.5 ± 0.01 lux from a 20 cm × 1.5 cm 5,000 K broad-spectrum LED strip. Light levels were measured with an Extech EasyView Digital Light Meter (model EA13), and lux was calculated as the mean of measurements at perch height and at two opposing base corners. For a full-spectrum description of the lights, please see Alaasam et al. (2021).
Immunohistochemistry
We anesthetized birds with 0.1 ml of an anesthetic made from 30 mg ketamine HCl, 105 mg xylazine, and 8.25 ml saline. After confirming no response to a hard toe pinch, weight was taken, and we perfused birds with 1X PBS for 5 min and 4% paraformaldehyde in 1X PBS (PFA) for 13 min. Brains were removed, left in 4% PFA for 24 h, switched to a 15% sucrose solution for 4-12 h, followed by a 30% sucrose solution overnight, and then flash frozen with powdered dry ice and stored at −80°C until slicing.
We incubated brain slices in a blocking solution (4% BSA, 0.4% Triton, 0.05% Na-azide in 1X PBS) for 3 h and then with primary antibody diluted in blocking solution (cFos anti-rabbit polyclonal from ABCAM (ab190289) diluted 1:1000; ZENK anti-mouse monoclonal received from Dr. Keays' lab (Nordmann et al., 2020) diluted 1:300) at 4°C for ~46 h. We washed slices 3 times in 1X PBS for 25 min each at room temperature and incubated overnight at 4°C with secondary antibodies (anti-rabbit 488 (ABCAM ab150081) diluted 1:1000 and anti-mouse 594 (ABCAM ab150116) diluted 1:1000), protected from light. We then incubated slices with DAPI for 15-25 min at room temperature and washed them in 1X PBS 3 times. Slices were mounted with antifade mounting medium (VECTASHIELD®) on slides. We imaged tile scans of full slices within 1 week of mounting on a Leica TCS SP8 confocal microscope.
Statistical analyses
We analyzed images in ImageJ. We determined brain regions from anatomical locations using DAPI staining and a reference atlas from zebrafinchatlas.org. Cells were scored positive for cFos or ZENK if their signal was at least three times the mean brightness and overlapped with DAPI. We divided the number of positive IEG cells by the total DAPI cell count to determine the expression percentage in representative areas measured over several slices.
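A minimal sketch of this scoring criterion is given below, assuming the IEG channel is a 2-D intensity array and the DAPI nuclei have already been segmented into a boolean mask; the per-cell bookkeeping of the actual ImageJ workflow is simplified to connected components.

import numpy as np
from scipy import ndimage

def count_positive_cells(ieg_channel, dapi_mask):
    # Count DAPI nuclei whose IEG signal reaches 3x the mean image brightness.
    threshold = 3.0 * ieg_channel.mean()
    labels, n_cells = ndimage.label(dapi_mask)   # one label per nucleus
    peaks = ndimage.maximum(ieg_channel, labels, index=range(1, n_cells + 1))
    n_positive = int(np.sum(np.asarray(peaks) >= threshold))
    return n_positive, n_cells

def expression_percent(n_positive, n_dapi):
    return 100.0 * n_positive / n_dapi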
We performed statistical analyses in R, version 4.1.2 (R Development Core Team, 2011). We ran generalized linear mixed-effect models to assess whether IEG expression levels were affected by the treatment group as a fixed effect (lme4 package). Slice number and bird ID were included as random effects. We used a Kruskal-Wallis test to analyze the interaction of behaviors and treatment groups. We ran a correlation matrix for all brain regions in each treatment for both cFos and ZENK (Supplementary Figure S1).
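For readers who do not use R, a simplified Python analogue of the lme4 model is sketched below on synthetic stand-in data; it fits expression percentage against treatment with bird identity as a random intercept and slice as an extra variance component, approximating (but not reproducing) the original R specification.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data with the structure described in the text.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "expr_pct": rng.uniform(0, 30, 120),                  # % IEG-positive cells
    "treatment": np.repeat(["ALAN", "day", "night"], 40),
    "bird_id": np.repeat(np.arange(12), 10),              # birds nested in groups
    "slice": np.tile(np.arange(5), 24),
})

# Random intercept per bird; slice enters as a variance component.
model = smf.mixedlm("expr_pct ~ treatment", data=df, groups=df["bird_id"],
                    vc_formula={"slice": "0 + C(slice)"})
print(model.fit().summary())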
Ethics statement
All procedures were conducted in accordance with National Institutes of Health guidelines on the ethical use of animals and approved by the University of Nevada, Reno Institutional Animal Care and Use Committee.
IEG expression
The ALAN group was significantly different from the control night group (cFos: p = 0.027, ZENK: p = 0.037) but not the control day group (cFos: p = 0.17, ZENK: p = 0.66) when all 24 brain regions were analyzed together (Figure 2A). We broke down the analysis into two major pathways, motor and visual, as well as additional areas. There was no significant difference in cFos or ZENK expression between the ALAN group and either control across all combined areas analyzed in the motor pathway (cFos-Day: p = 0.40, cFos-Night: p = 0.14, ZENK-Day: p = 0.72, ZENK-Night: p = 0.14). Similarly, we saw no significant difference for all areas analyzed in the visual pathway (cFos-Day: p = 0.08, cFos-Night: p = 0.07, ZENK-Day: p = 0.61, ZENK-Night: p = 0.11). However, individual areas in both pathways were significantly different (Table 1; Supplementary Figures S2, S3).
To determine if the expression was based on activity, we reanalyzed expression with birds separated into only two groups of active (n = 7) or inactive (n = 6; total minutes of activity <1 min) 90 min before perfusion. Active birds included the control day group and non-active included the control night, with the ALAN group split between the two, based on activity. There was no significant difference in cFos or ZENK expression overall between active and non-active birds (cFos: z = 1.18, p = 0.24, ZENK: z = 1.70, p = 0.09). Additionally, there was no significant difference between active and non-active birds in the whole motor (cFos: z = 1.28, p = 0.20, ZENK: z = 1.81, p = 0.07) or visual (cFos: z = 0.65, p = 0.51, ZENK: z = 1.37, p = 0.17) pathways.
In the visual pathway, the ALAN group showed significantly higher cFos expression in the striatum adjacent to the core of the entopallium, posterior hyperpallium (Figure 2B), and ventral mesopallium adjacent to the core of the entopallium than the control night group, and significantly higher ZENK expression in the nidopallium adjacent to the core of the entopallium but lower in the core of the entopallium. The ALAN group also showed significantly higher cFos expression than the control day group in the striatum adjacent to the core of the entopallium and posterior hyperpallium (Table 1).
In the motor pathway, the ALAN group showed significantly higher cFos expression than the control night group in the anterior mesopallium dorsal (Figure 2B) and anterior mesopallium ventral regions and significantly higher ZENK expression in the anterior mesopallium dorsal and nidopallium caudolateral regions (Figure 3).
FIGURE 1
Types of behavior 75 to 105 min before perfusion for birds exposed to ALAN and control birds collected during the day and night. A 30-min window of time 90 min before perfusion (75 to 105 min) was analyzed and broken down into four different behaviors: feeding (eating or drinking), grooming, hopping, and inactive. Shown are means ± 1 SE.
FIGURE 2
Immediate early gene expression of cFos and ZENK throughout the brain for birds exposed to ALAN, and control birds collected during subjective day and night. (A) Total cFos and ZENK expression, shown in percentages. Expression is significantly higher in the ALAN treatment group compared to the night controls but not the day controls. (B) cFos expression (percentage) comparing birds exposed to ALAN to control day and control night groups in three brain regions: posterior hyperpallium, anterior mesopallium dorsal, and entopallium. (C) ZENK expression (percentage) comparing birds exposed to ALAN to control day and control night groups in three brain regions: hippocampus, medial dorsal mesopallium, and entopallium. Displayed are representative brain regions from a priori hypotheses; please see Supplementary Figures S2, S3 for all brain regions. Shown are means ±1 SE. Significance stars: *p < 0.05, **p < 0.01, ***p < 0.001.
FIGURE 3
Brain slices with cFos and ZENK staining in the anterior mesopallium dorsal. (A) A sagittal slice of a representative zebra finch brain 1 mm from the center, showing the anterior mesopallium dorsal. Blue is DAPI, green is cFos, and red is ZENK expression. (B) Images from the anterior mesopallium dorsal of cFos, ZENK, and the overlay of both with DAPI for a bird exposed to ALAN, a bird collected during the day (control day), and a bird collected at night (control night).
The ALAN group also had significantly higher levels of cFos expression compared to the control day group in the anterior mesopallium dorsal and lateral intermediate arcopallium and higher ZENK expression in the nidopallium caudolateral. However, the ALAN group had significantly lower ZENK expression in the anterior striatum, nidopallium adjacent to the basorostral nucleus, and ventral mesopallium adjacent to the basorostral nucleus (Table 1). There was no significant difference between active and non-active birds in the anterior mesopallium dorsal, nidopallium caudolateral, or nidopallium adjacent to the basorostral nucleus.
The ALAN group also showed higher cFos expression in the area para-hippocampalis, medial dorsal mesopallium, entopallium (Figure 2B), and lateral ventral mesopallium and higher expression of ZENK in the caudal striatum, medial dorsal mesopallium (Figure 2C), entopallium (Figure 2C), and lateral ventral mesopallium as compared to the control night group, but lower levels of ZENK expression in the hippocampus (Figure 2C) and septum. The ALAN group also showed higher levels of cFos expression in the entopallium and higher ZENK expression in the area para-hippocampalis, medial dorsal mesopallium, and entopallium as compared to the control day group (Table 1).
Discussion
Although ALAN is a pervasive pollutant, the neuronal response remains unclear. We imaged IEG expression of 24 brain regions during the day, night, and ALAN exposure in birds and found various regions were significantly differentially activated among the treatment groups. Overall, ALAN-treated birds were more like control-day birds in total IEG expression. However, six brain regions differed among all three treatment groups: anterior mesopallium dorsal, entopallium, medial dorsal mesopallium, posterior hyperpallium, nidopallium caudolateral, and striatum adjacent to the core of the entopallium.
Vision
In the visual pathway, control night birds (LD, sacrificed during the night) were significantly different from control day (LD, sacrificed during the day) and ALAN birds (LLdim, sacrificed during the night). These large differences are to be expected, as LD control night birds were inactive. However, two areas still showed significantly stronger cFos expression in ALAN birds than in both control groups: the posterior hyperpallium and the striatum adjacent to the core of the entopallium. ALAN was a novel visual stimulus for the birds, likely engaging a visual neuronal response.
The entopallium, the most prominent area to emerge, was significantly different from both controls for both IEGs. The entopallium is involved in visual pattern recognition (Watanabe et al., 2008, 2011). Surprisingly, we found that birds exposed to ALAN had different IEG expression in visual pathways compared to day controls. Even very dim levels of ALAN (around 1.5 lux) elicit a clear response in recognizing this visual input.
Movement
Of the seven regions of the motor pathway analyzed, ALAN birds were significantly different from the day controls in either cFos or ZENK in six. However, when accounting for activity, the ALAN group remained significantly different, with increased expression in the anterior mesopallium dorsal and nidopallium caudolateral and decreased expression in the nidopallium adjacent to the basorostral nucleus. Although the nidopallium caudolateral has additional functions, the anterior mesopallium dorsal and nidopallium adjacent to the basorostral nucleus are differentially activated under ALAN and not associated with hopping. These areas may be registering movements we did not track, such as head turns and wing flapping, or may be associated with other functions we are unaware of.
Memory and learning
We found that birds exposed to ALAN were significantly different from both controls in areas associated with learning and memory. The ALAN group had significantly higher IEG expression than the day and night controls in the area para-hippocampalis and medial dorsal mesopallium, which are involved in spatial and object recognition and associative learning, respectively (He et al., 2010; Damphousse et al., 2022). The ALAN birds also had significantly lower IEG expression than the night controls in the hippocampus, which is involved in spatial memory and learning (Bingman et al., 1990; Mayer et al., 2013). Dim ALAN dampens behavioral measures of learning and memory, which have also been correlated with structural alterations in the hippocampus (Taufique et al., 2018, 2019; Liu et al., 2022). Lower nocturnal IEG expression in the hippocampus may partially explain why dim ALAN suppresses gene expression in the hippocampus (Taufique et al., 2018, 2019). Sleep is believed to activate the hippocampus for memory consolidation (Klinzing et al., 2019). Indeed, we see higher IEG expression in our control night birds than in day birds. A nocturnal suppression of hippocampal activity may impair memory consolidation and learning under ALAN.
ALAN-treated birds had significantly higher IEG expression in the nidopallium caudolateral than either of the controls. This aligns with previous research finding that dim ALAN alters the neuroarchitecture of the nidopallium caudolateral, the avian equivalent of the prefrontal cortex (Gunturkun, 2005; Gunturkun and Bugnyar, 2016; Taufique et al., 2019). The nidopallium caudolateral has been shown to mirror prefrontal structures, having the same receptor architecture as Brodmann Area 10 in humans, which is involved in many processes including reward and conflict, working memory, and pain (Herold et al., 2011; Peng et al., 2018). IEG activation in areas associated with memory supports previous findings that ALAN impairs learning and memory (Liu et al., 2022). Additionally, the avian nidopallium caudolateral, along with the entopallium, has been shown to display attentional mechanisms (Johnston et al., 2017), implying an alert state in our ALAN-exposed birds.
Pain processing
Another association to emerge was pain processing. Dim ALAN has been shown to alter pain perception in mice (Bumgarner et al., 2020). ALAN-treated birds had significantly higher activity in the caudal striatum than night controls and significantly higher activity in the nidopallium caudolateral than both controls. Although not much is known about the avian caudal striatum, this area is related to anxiety and pain in mice (Jin et al., 2020). Additionally, the nidopallium caudolateral has been associated with Brodmann Area 10 in humans, which is also involved in pain perception (Herold et al., 2011; Peng et al., 2018).
Our results show that ALAN typically increases IEG expression in differentially activated areas compared to both controls. However, the reduction of ZENK expression in the septum and hippocampus implies reduced neuronal activation in co-regulated functions, such as hormonal control. This is supported by previous research showing that ALAN alters hormone production (Ouyang et al., 2018).
In summary, through fine analyses of IEG expression, we found that initial ALAN exposure activates brain areas involved in vision, movement, learning and memory, pain processing, and hormone regulation, which may be differentially regulated under prolonged sleep loss or long-term exposure to ALAN. Additionally, first-time exposure to ALAN at a different time of night may produce responses different from those we observed. Although ALAN may not be eliciting changes through circadian regulation, we still see substantial responses across brain areas that warrant further study. ALAN creates a unique brain state that is significantly different from day or nighttime brain activity: dim light creates a novel environment, different from that of birds active in the day or sleeping at night, which produced widespread differential brain activity.
Data availability statement
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found at: https://datadryad.org/stash/share/uIyGiK3KEm3Cng_fNAvg9m5pFE65ZJyG4kFOpBO5W3M.
Ethics statement
The animal study was reviewed and approved in accordance with National Institutes of Health guidelines on the ethical use of animals.
Author contributions
CH, VA, and JO designed the experiments. CH, NC, and AC conducted the experiment and completed data analysis. SP and JO oversaw the project, provided training, and reviewed analyses. JO, NC, and AC provided funding. All authors contributed to and reviewed the writing.
Funding
JO is supported by NIH National Institutes of Health R15 ES030548. NC and AC were supported by the Nevada Undergraduate Research Award and AC by the NEXUS award through the Undergraduate Research Opportunity Program. Research reported in this publication used the Cellular and Molecular Imaging Core facility supported by the National Institute of General Medical Sciences of the National Institutes of Health under grant number P20 GM103650. | 2023-07-11T16:16:16.065Z | 2023-07-04T00:00:00.000 | {
"year": 2023,
"sha1": "a427c36f91abc4edf98314a41c15e1025e8245b4",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fnins.2023.1194996/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f02fb50c17de58faee4318064bcfb65bb7299426",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
239878344 | pes2o/s2orc | v3-fos-license | Solving the Dirac equation in central potential for muonic hydrogen atom with point-like nucleus
The muon has properties very similar to those of an electron. For this reason, it is possible to replace one of the electrons in an atom by a muon to form a muonic atom. The main purpose of this study is to calculate the energy eigenvalues and to study the probability density of muonic hydrogen with a point-like nucleus. Numerical results have been generated using the Matlab software programming language. The reduced mass of the muon has been used in order to correct the error incurred by the assumption that the nucleus of muonic hydrogen is point-like, which in turn gives it an infinite mass. The energy eigenvalues for different states have been calculated using the rest and reduced masses of the muon, and the results have been tabulated. According to these results, the relativistic quantum description alone is not responsible for the Lamb shift. The probability density shows that the muon is much more likely to be found near the nucleus of the hydrogen atom in the ground state than in the excited states.
Introduction
The muon is an elementary particle similar to the electron, with an electric charge of −1 in units of the proton charge e and a spin of 1/2, but with a rest mass of $m_{\mu} \approx 106\,\mathrm{MeV}/c^{2} \approx 207\,m_{e}$. The muon, $\mu$, and its associated neutrino $\nu_{\mu}$ were first discovered in the decays of charged pions: $\pi^{\pm} \to \mu^{\pm} + \nu_{\mu}(\bar{\nu}_{\mu})$ [1]. A muon is a lepton with properties very similar to those of an electron. For this reason, it is possible to replace one of the electrons in an atom by a muon to form a muonic atom. However, since the mass of a muon is 207 times larger than that of an electron, the radii of the muonic orbits are much smaller than those of electrons [2]. Consequently, the overlap between the muon orbits and the nucleus is much larger than in ordinary atoms, and the energy levels can be significantly perturbed by the nuclear charge distribution [3].
Muonic atoms are observed for the most part by means of the x-ray radiation they emit; this radiation decays with the half-life characteristic of the muon. Since muons approach the nucleus much more closely than electrons in an ordinary atom, they can be used to study the details of nuclear charge distributions, the distribution of the nuclear magnetic moment within the nuclear volume, and nuclear quadrupole deformation. Exotic atoms such as muonic atoms can thus be used for the investigation of the atomic nucleus [3,4].
Since the muon is heavier than the electron, in an atomic bound state it sits much closer to the proton than an electron does, so proton size effects are greatly magnified. Despite the muon's limited 2.2 μs lifetime, it was anticipated that the larger impact of the proton size on the energy levels would allow a 0.1% measurement of the proton charge radius. The effective potential that the muon experiences is significantly modified by the proton charge distribution. Therefore, a measurement of the 2P-2S Lamb shift could give a precise value for the proton charge radius [5].
For heavy nuclei, the Bohr radius of a muonic atom is of the same order as the nuclear radius. The muon therefore penetrates the nucleus, having a 90% probability of being inside the nucleus in the ground state. Because of this, the study of muonic atom spectra gives useful information on the structure of nuclei, in particular on the charge (i.e. proton) distribution inside the nuclei [6].
Several studies have been conducted on the energy corrections of muonic hydrogen [7,8], but they did not calculate the energy eigenvalues or study the probability density of the muonic hydrogen atom. Therefore, in this paper we focus on the Dirac equation with minimal coupling, calculate the energy eigenvalues of muonic hydrogen with a point-like proton for different states, and perform numerical calculations to interpret the probability density of the muonic hydrogen atom. The reduced mass of the muon has been used in order to correct the error incurred by the assumption that the nucleus of muonic hydrogen is point-like, which in turn gives it an infinite mass [9].
Results and discussion
2.1. Minimal coupling to the electromagnetic field

The Dirac equation in an electromagnetic field with scalar potential $A_{0}$ and vector potential $\mathbf{A}$ is

$$\left[\boldsymbol{\alpha}\cdot(\mathbf{p}-e\mathbf{A})+\beta m+eA_{0}\right]\psi=E\psi .$$

By using separation of variables, the wave function becomes [10]

$$\psi=\begin{pmatrix}\varphi\\ \chi\end{pmatrix},$$

where $\varphi$ and $\chi$ are Pauli spinors. Now we assume that no contribution comes from the vector potential ($\mathbf{A}=0$), so that the Dirac equation becomes the coupled differential equations of the Pauli spinors [9]

$$\boldsymbol{\sigma}\cdot\mathbf{p}\,\chi=(E-eA_{0}-m)\,\varphi ,\qquad \boldsymbol{\sigma}\cdot\mathbf{p}\,\varphi=(E-eA_{0}+m)\,\chi .$$

Writing the solution in the central field as

$$\psi=\begin{pmatrix}g(r)\,Y_{jlm}\\ i\,f(r)\,Y_{jl'm}\end{pmatrix},$$

where $Y_{jlm}$ is a two-row spherical spinor and the quantum numbers $l$ and $l'$ label the upper and lower components of the Dirac spinor, we neglect the angular wave functions and replace the partial derivatives by ordinary derivatives, since all remaining terms depend only on the radial wave functions. With $G(r)=r\,g(r)$, $F(r)=r\,f(r)$ and natural units ($\hbar=c=1$), the radial wave equations become

$$\frac{dG}{dr}+\frac{\kappa}{r}\,G-\left[E-V(r)+m\right]F=0 ,\qquad \frac{dF}{dr}-\frac{\kappa}{r}\,F+\left[E-V(r)-m\right]G=0 ,$$

where $V(r)=eA_{0}$ is the central potential for a point-like proton, $\kappa$ is the generalized quantum number, and $m$ and $E$ are the rest mass and total energy of the muon, respectively.
Exact solution to the coupled radial Dirac equation
The Coulomb potential is given by [12]

$$V(r)=-\frac{Z\alpha}{r}$$

in natural units. This form of the potential leads to the well-known quantized energy eigenvalues

$$E_{n\kappa}=m\left[1+\left(\frac{Z\alpha}{\,n-|\kappa|+\sqrt{\kappa^{2}-(Z\alpha)^{2}}\,}\right)^{2}\right]^{-1/2},$$

where $\alpha \approx 1/137$ is the fine-structure constant and $n = 1, 2, \ldots$ is the principal quantum number.
We convert this quantized energy eigenvalue for a point-like proton into Matlab source code to generate numerical results. Using Matlab, we tabulate the energy eigenvalues for different states of muonic hydrogen, as shown in Table 1.
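The Matlab code itself is not reproduced in the article; the Python sketch below evaluates the same Dirac-Coulomb eigenvalue formula with the reduced muon mass, using standard particle masses, so the Table 1 values may differ in the last digits depending on the constants adopted.

import numpy as np

ALPHA = 7.2973525693e-3                 # fine-structure constant
M_MU, M_P = 105.6583745, 938.2720813    # muon and proton masses, MeV/c^2
M_RED = M_MU * M_P / (M_MU + M_P)       # reduced mass, ~94.9645 MeV/c^2

def dirac_energy(n, kappa, Z=1, m=M_RED):
    # Total energy (MeV) of a Dirac particle in a point-Coulomb field;
    # kappa = -(l+1) for j = l + 1/2 and kappa = +l for j = l - 1/2.
    za = Z * ALPHA
    denom = n - abs(kappa) + np.sqrt(kappa**2 - za**2)
    return m / np.sqrt(1.0 + (za / denom)**2)

for n, kappa, label in [(1, -1, "1s1/2"), (2, -1, "2s1/2"),
                        (2, 1, "2p1/2"), (2, -2, "2p3/2")]:
    binding_keV = (dirac_energy(n, kappa) - M_RED) * 1e3
    print(f"{label}: {binding_keV:.4f} keV")  # 1s1/2 close to -2.53 keV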
The probability density
The wave function in block-matrix form is given by

$$\psi=\begin{pmatrix}g(r)\,Y_{jlm}\\ i\,f(r)\,Y_{jl'm}\end{pmatrix},$$

and the probability density is $\rho=\psi^{\dagger}\psi$, where $\psi^{\dagger}$ is the conjugate transpose. Then the probability density can be rewritten as

$$\rho=|g(r)|^{2}\,Y_{jlm}^{\dagger}Y_{jlm}+|f(r)|^{2}\,Y_{jl'm}^{\dagger}Y_{jl'm},$$

where $Y_{jlm}$ is the spin angular function in two-component form, built from spherical harmonics $Y_{l,m}$ coupled with Pauli spinors. We are interested in the probability density that depends only on the radial coordinate. Integrating over all directions, and noting that the only angular dependence resides in the spherical harmonics $Y_{l,m}$, whose orthonormalization condition is given by [14]

$$\int Y_{l',m'}^{*}\,Y_{l,m}\,d\Omega=\delta_{ll'}\,\delta_{mm'},$$

the probability density in terms of the total angular momentum becomes

$$P(r)=r^{2}\left[|f(r)|^{2}+|g(r)|^{2}\right].\qquad (2.20)$$

2.3.1. The numerical results of probability density for muonic hydrogen atom

The probability density in equation (2.20) can be converted into Matlab source code to produce numerical results. We used MATLAB to generate graphs of the probability density for different states of the muonic hydrogen atom; the probability density is obtained up to a normalization constant. The mass used in the calculation, in mega-electron-volts (MeV), is the reduced mass of the muon, 94.96447137 MeV. Figure 1 illustrates the probability density for the 1s1/2 (red line), 2s1/2 (green line) and 3s1/2 (blue line) energy levels of the muonic hydrogen atom with a point-like proton. The graph shows that the probability density decreases as the radial distance increases. Compared with the excited states, the ground-state muon is much more likely to be found near the nucleus: the muon has a high chance of being found near the nucleus of the hydrogen atom in the ground state and a lower chance when it is in the excited states.
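As a sanity check on the qualitative behavior in Figure 1, the hedged snippet below evaluates the analytic Dirac 1s radial density for a point-Coulomb potential, whose shape r^(2*gamma)*exp(-2Zr/a) peaks near the muonic Bohr radius (about 285 fm for the reduced mass used above), reproducing the concentration of the ground-state muon near the nucleus.

import numpy as np

HBARC = 197.3269804            # MeV*fm
ALPHA = 7.2973525693e-3
M_RED = 94.96447137            # reduced muon mass, MeV/c^2

def radial_density_1s(r_fm, Z=1):
    # Unnormalized Dirac 1s radial density r^2*(f^2 + g^2): both radial
    # functions share the shape r^(gamma-1)*exp(-Z*r/a), where
    # gamma = sqrt(1 - (Z*alpha)^2) and a = hbar*c/(m*c^2*Z*alpha).
    gamma = np.sqrt(1.0 - (Z * ALPHA)**2)
    a = HBARC / (M_RED * Z * ALPHA)      # ~284.7 fm for muonic hydrogen
    return r_fm**(2.0 * gamma) * np.exp(-2.0 * Z * r_fm / a)

r = np.linspace(1.0, 2000.0, 4000)
print(r[np.argmax(radial_density_1s(r))])  # most probable radius, near a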
Data availability statement
All data that support the findings of this study are included within the article (and any supplementary files). | 2021-10-17T15:14:37.152Z | 2021-10-14T00:00:00.000 | {
"year": 2021,
"sha1": "b0d34ef6d2804d78894d286a87e1d9cfd3e7e3dd",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1088/2399-6528/ac2fbc",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "7b161209d90de70311897698ec65ca69611c1c0a",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
210632330 | pes2o/s2orc | v3-fos-license | Numerical investigation for high lifetime solar-powered airport signal unit development
Autonomous solar-powered small light and signal units can help solve many navigation and transportation problems in regions with sufficient solar energy potential. Even where a centralized grid is present, integrating additional consumers can be an issue due to construction works and power availability. The electrochemical accumulator is the traditional solution for such devices, but its application usually faces problems with lifetime and stable operation at low temperatures. The electrochemical double-layer supercapacitor (EDLC) possesses a long cycle life and operates stably down to minus 40-45°C; low specific energy capacity, high specific energy costs and high self-discharge rates are the main drawbacks of this technology. The paper presents a numerical simulation of airport signal unit operation with either a lead-acid accumulator or an EDLC. Using developed software based on energy balance calculations and satellite solar energy data, optimal configurations of signal units were derived for Rostov and Saratov climate conditions. Technical and economic estimates for both configurations were made assuming 100% operational availability and minimal servicing over 20 years, the typical lifetime of a monocrystalline photovoltaic module. Features of the maximum power point tracking (MPPT) charge controller typical for such applications are also considered. Target parameters for commercially attractive EDLC application were evaluated.
Introduction.
It is known that operating electrochemical batteries at low temperatures usually causes problems, which become more serious the lower the temperature. A large part of the Russian Federation's territory lies in a cold climate zone, yet transport and infrastructure objects must be operated in many low-temperature regions. Modern transport infrastructure also includes airports, especially in regions covering large areas, and signal light units are used to provide their stable and safe operation. Autonomous solar-powered small light and signal units can help solve many navigation and transportation problems in regions with sufficient solar energy potential. Even where a centralized grid is present, integrating additional consumers can be an issue due to construction works and power availability. Such an autonomous solar-powered unit includes a photovoltaic (PV) panel, a charge controller and an energy storage device that provides night operation of a light source, usually based on light-emitting diode (LED) technology. The energy storage device is the most problematic component of the whole unit: usually a lead-acid electrochemical battery is used, causing problems with lifetime, low depth of discharge and low-temperature operation. The average lifetime of modern PV panels is about 20 years, of charge controllers about 10 years (due to electrolytic capacitors drying out), and of lead-acid batteries about 4-5 years. Battery replacement, considering the battery's 60-70% share in the capital costs of the light-signal unit, significantly increases the payback period for any solar-powered device. Cold climate operation usually increases PV panel energy output but drastically worsens battery operation, while at high temperatures lead-acid batteries are prone to accelerated degradation. Attempts to decrease the temperature influence by placing the battery underground make the whole system more expensive and complicated. The task of this research is therefore to find an alternative to the lead-acid battery that can provide a lifetime comparable to those of the PV panel and charge controller. In [1], similar motivation led to the introduction of a vanadium flow battery and supercapacitors into a grid-tie PV park for mitigating power fluctuations. The main purpose of this research is to estimate the feasibility of supercapacitors as energy storage devices for autonomous light-signal solar-powered units. Despite their high specific energy cost, their long lifetime (about 1 million cycles) and stable operation at low temperatures give them a chance to compete successfully with the lead-acid battery.
Energy storage cold climate operation
Russia is one of the coldest countries in the world; permafrost occupies about 65% of its entire territory, with the largest permafrost areas in Eastern Siberia and Transbaikalia [2]. This circumstance causes certain problems for equipment relying on electrochemical energy storage devices. A climate map of the Russian Federation is shown in Figure 1. During electrochemical energy storage operation, incomplete battery discharge at low temperature is a typical situation: the ambient temperature is often below 0 °C, and the equipment must keep running smoothly, which is impossible under these conditions without an external source of heat or electricity [3]. In [4], energy output measurements were made for various batteries and supercapacitors at temperatures below zero. Climatic tests showed that only supercapacitors based on organic electrolytes (curves 1 and 2) retain almost nominal energy capacity, efficiency and power output over a wide sub-zero temperature range. For the other tested types of electrochemical batteries, the residual energy capacity decreases sharply in the low-temperature zone, as well as with an increase in discharge current.
Supercapacitor and lead-acid battery unit operation calculation
The purpose of the calculation is to determine the optimal composition of an autonomous power supply system for airport taxiway lights and to compare lead-acid batteries and supercapacitor modules as storage devices for such systems. The most suitable optimization object is a taxiway light with a peak power of 1 W [5]: the low power implies a relatively low energy consumption, which can be covered by supercapacitors. Since the technical and economic indicators of photovoltaic power objects depend strongly on their location, pilot facilities are needed for the calculations. The airports of «Tsentralny» (Saratov) and «Yuzhny» (Rostov-on-Don) were chosen as pilot implementation sites; both are located in regions with relatively high insolation levels.
It is assumed that each taxiway light will be equipped with its own power supply system, including the highly efficient photovoltaic module Hevel HJT 300 (peak power 300 W, efficiency 18%, heterojunction with intrinsic thin layer technology), a supercapacitor or lead-acid battery with a charge controller, and, for the supercapacitor battery, a DC converter to the 24 V voltage level. The charge controller provides charging of the supercapacitor or lead-acid battery from the photovoltaic module with maximum power point tracking (MPPT), limits the current and voltage of the charge-discharge process, communicates with its own control and management system, and connects and disconnects the load. An important issue is that different controllers are needed for the lead-acid battery (10.5-15 V operational voltage range) and the supercapacitor (8-16 V operational voltage range) for better energy utilization; the estimate therefore assumes a more expensive controller for supercapacitors.
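The wider voltage window matters because the energy recoverable from a capacitor scales with the difference of the squared voltage limits; the short check below (plain arithmetic, using only the module ratings quoted in this paper) shows that the 8-16 V window releases 75% of the module's stored energy.

C = 500.0                  # F, supercapacitor module capacitance
V_MAX, V_MIN = 16.0, 8.0   # controller operating window, V

# Energy recoverable between the limits: E = C/2 * (Vmax^2 - Vmin^2)
e_usable_wh = 0.5 * C * (V_MAX**2 - V_MIN**2) / 3600.0
print(f"{e_usable_wh:.1f} Wh usable")   # about 13.3 Wh
print(1.0 - (V_MIN / V_MAX)**2)         # 0.75 of the fully charged energy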
During the calculations, two possible placements of the photovoltaic module were considered: horizontal, which allows it to be mounted in the runway next to the taxiway light, and slightly away from the runway at a tilt angle to the horizon equal to the latitude + 15 degrees (minimizing snow cover on the module in winter).
Special software for estimating solar-powered unit operation has been developed at the Joint Institute for High Temperatures. Input climate data are daily satellite observations of temperatures and solar radiation sums on a horizontal surface from the NASA POWER database [6] from 1987 to 2016, together with the unit location coordinates and UTC time zone. Recalculation of the solar radiation sums at a given tilt angle of the PV panels was carried out using the approach described in [7]. Input data on the key components include PV panel area and peak power, energy storage device efficiency, energy capacity and operation voltage, and charge controller peak power and its efficiency-on-power dependence. Costs of all components are also used to estimate the capital costs of the solar-powered light-signal unit. Consumer power demand, its time dependence and the demanded power availability are likewise chosen by the user before calculation. With these initial data, the software calculates hourly temperature, insolation and the energy balance of the solar-powered unit, choosing enough components to reach the given power availability during the whole calculation period. The energy balance of the whole power unit is calculated for each hour from January 1987 to December 2016, with power availability during the whole night throughout this period set as the boundary condition. The same availability can be reached with different combinations of PV panels and batteries, so the software generates up to 10 such combinations (increasing the number of PV panels, usually from 1 to 10 pcs) with their capital costs, giving the consumer the possibility to choose one with a suitable cost and battery average depth of discharge (which influences battery lifetime). Capital costs for each combination are estimated as the sum of the storage system, PV panel and charge controller costs. Results of such an optimization for the Rostov-on-Don site are given in Figure 2. Because the software was initially developed for batteries, the energy capacity of the supercapacitor module (16 V, 500 F) was estimated as 1.2 Ah, taking 12 V as the average operating voltage and 8-16 V as the full operating voltage range. Self-discharge rates of 35% per month for the supercapacitor and 3% per month for the lead-acid battery were also taken into account.
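The institute's software is not public; as an illustration of the two conversions just described, the sketch below reproduces the Ah-equivalent estimate and runs a toy hourly energy balance with the stated self-discharge rate (the PV input series, efficiency value and loop structure are assumptions, not the actual model).

C, V_MAX, V_MIN = 500.0, 16.0, 8.0
ah_equiv = C * (V_MAX - V_MIN) / 3600.0   # charge swing over the window
print(f"{ah_equiv:.2f} Ah")               # ~1.11 Ah, rounded to 1.2 Ah

def availability(pv_wh_hourly, load_w=1.0, eta=0.9, self_dis_month=0.35):
    # Toy hourly balance: charge from PV, serve a constant 1 W load,
    # apply EDLC self-discharge; returns the fraction of hours covered.
    cap_wh = 0.5 * C * (V_MAX**2 - V_MIN**2) / 3600.0   # usable energy, Wh
    keep = (1.0 - self_dis_month) ** (1.0 / 730.0)      # hourly retention
    soc, served = cap_wh, 0
    for pv_wh in pv_wh_hourly:
        soc = min(cap_wh, soc * keep + pv_wh * eta)
        if soc >= load_w:            # 1 Wh consumed per hour at 1 W
            soc -= load_w
            served += 1
    return served / len(pv_wh_hourly)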
Results and discussion
Calculations were carried out for a single solar-powered LED unit with a load power of 1 W, operating around the clock. Calculation results are given in Table 1. Component parameters for the calculation were as follows: 300 W peak power for the PV panel, and 16 V DC voltage and 500 F capacitance for the supercapacitor battery; 24 V DC was chosen as the DC bus operating voltage to feed the LED unit. The close cost values for both sites are explained by the given parameters of the PV panel and supercapacitor: the software can operate only with integer numbers of components, and one set of PV panel plus supercapacitor battery is sufficient to cover the load at both sites. Results for the same sites with lead-acid batteries are given in Table 2. The calculation showed stable system operation with 50 Ah batteries at much lower capital costs than in the case of supercapacitors, but the energy capacity loss at low temperatures must be taken into account, so the real cost of a stably operating system can be estimated at about 50 thousand rubles. Battery replacement is expected once every 5 years, so a comparable payback period for supercapacitors can be achieved only if the supercapacitor lifetime reaches 20 years and supercapacitor costs decrease, although supercapacitors also allow deeper discharge. A further increase of the solar panel count is undesirable due to the growth of the unit surface area and capital costs. The MPPT charge controller algorithm for such an application must take into account the high self-discharge rate of supercapacitor batteries, so self-consumption during periods when the load is fed from the supercapacitor battery must be minimized. Owing to stable supercapacitor operation over a wide temperature range, temperature compensation of the voltage can be neglected. For lifetime reasons, the use of electrolytic capacitors in such controllers must be limited.
Conclusions
Numerical simulation of autonomous solar-powered light signal unit for airports was performed for two locations in Russia using original software for energy balance of solar-based power units and satellite-based initial climate data to compare performance and capital costs for autonomous solarpowered units with supercapacitors or lead-acid batteries as an electric energy storage devices. Calculation included optimal device configuration for both lead-acid and supercapacitor storage system. 300 W peak power PV panel and 50 Ah 24 V lead acid battery (or 250 F 32 V supercapacitor battery) are enough for stable operation of 1 W LED light signal device in Saratov or Rostov-on-Don climate conditions. It was shown that lead-acid battery has cost advantage. But low lifetime and capacity loss at low temperatures give chance for supercapacitors application in case when their lifetime will achieve 20 years, as for monocrystalline PV panel. Such approach can realize more expensive system than in case of lead-acid batteries but it can be operated for 20 years without maintenance and battery change. Possibility to create such unit is shown and cost estimation is made due to high supercapacitors lifetime and stable operation at low temperatures. Special charge controller for such system is neededwith extended operation voltage range and low selfconsumption. | 2019-10-31T09:13:16.888Z | 2019-10-30T00:00:00.000 | {
"year": 2019,
"sha1": "1fe1edd81ab179d52bef2d7a18c93a032863bb8a",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/564/1/012137",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "ed7d0e01246520019fe2d2c786e870003a42c376",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
73487936 | pes2o/s2orc | v3-fos-license | DksA–RNA polymerase interactions support new origin formation and DNA repair in Escherichia coli
Summary
The formation of new replication origins (cSDR) and repair of DNA double-strand breaks (DSBs) in E. coli share a commonality. We find that the two processes require the RNAP-associated factor, DksA. However, whereas cSDR also relies on (p)ppGpp, the alarmone molecule is dispensable for the repair of topoisomerase type II (Top II) DNA adducts and associated DSBs. The requirement for DksA in repair of nalidixic acid (Nal)-induced DSBs or for the formation of new origins is not suppressed by a greA deletion mutation, indicating an active role of DksA rather than competition with GreA for insertion into the RNAP secondary channel. Like dksA mutations, transcription termination factor Rho mutations also confer sensitivity to Nal. The rho and dksA mutations are not epistatic, suggesting they involve different repair pathways. The roles of DksA in DSB repair and cSDR differ; certain DksA and RNAP mutants are able to support the first process, but not the latter. We suggest that new origin formation and DNA repair of protein adducts with DSBs may both involve the removal of RNAP without destruction of the RNA:DNA hybrid.
Introduction
Escherichia coli utilizes at least three different modes of chromosome replication initiation. In addition to DnaA-dependent DNA unwinding at oriC, replication can initiate at D-loops and R-loops (reviewed in Kogoma (1997)). R-loops consist of RNA insertions into the DNA double helix, generating an RNA:DNA hybrid and a single-stranded DNA loop. Replication initiation at R-loops (constitutive stable DNA replication; cSDR) occurs in cells lacking ribonuclease HI (RNase HI), which degrades R-loop RNA (Ogawa et al., 1984; Maduike et al., 2014). Mutations in RNase HI suppress dnaA ts mutations and enable the growth of oriC mutants (Kogoma and von Meyenburg, 1983). RecA is required in cSDR to either form or stabilize R-loops (Kasahara et al., 2000). The RNA within the R-loop is elongated by DNA polymerase I (Pol I), and subsequent loading of two diverging replisomes by the replication restart machinery creates a new origin of replication (Kogoma and Maldonado, 1997). Replisome reloading proteins PriA and PriB are indispensable for cSDR (Masai et al., 1994; Sandler, 2005). cSDR is also dependent on transcription (von Meyenburg et al., 1987), but the role of the factors associated with RNA polymerase (RNAP) in this reaction has not been studied before. Here we find that DksA, a small RNAP-binding protein, is required for the formation of new origins in cSDR.
The 17.5 kDa DksA protein shares structural similarity with the two anti-backtracking factors, GreA and GreB (Perederina et al., 2004). DksA is composed of a globular domain with a zinc-binding region, a C-terminal (CT) helix and a coiled-coil domain that inserts into the secondary channel of RNAP (Perederina et al., 2004; Molodtsov et al., 2018). Unlike the Gre factors, DksA does not induce the intrinsic RNA cleavage activity of RNAP (Perederina et al., 2004). DksA, together with the β′ subunit of RNAP, forms (p)ppGpp (guanosine penta- and tetraphosphate) binding site 2 (Ross et al., 2016). (p)ppGpp also binds to site 1, located 60 Å away from site 2, at the interface of the ω and β′ RNAP subunits (Ross et al., 2013). (p)ppGpp increases the affinity of DksA for RNAP (Molodtsov et al., 2018). Moreover, the conformation of both DksA and RNAP in the complex changes if (p)ppGpp is present. In the absence of (p)ppGpp, binding of DksA to RNAP bends the β′ rim helix and shifts the β-lobe/i4 domain, which is part of the pincers of the DNA-binding main channel. This shift might weaken the grip on the non-template DNA in the transcription bubble and decrease open complex stability. Upon binding of (p)ppGpp to the RNAP-DksA complex, RNAP is restored to its original apo-form state. Similarly, the DksA conformation reverts to its unbound state in the ternary complex.
DksA, together with (p)ppGpp, participates in the stringent response, a reprogramming of cell metabolism in reaction to environmental stressors, such as nutrient deprivation and heat shock (reviewed in Gaca et al. (2015) and Hauryliuk et al. (2015)). Overall, the stringent response leads to the repression of genes required for rapid growth (such as rRNA and ribosomal protein genes) and the activation of genes involved in amino acid biosynthesis, nutrient acquisition and stress survival. Cellular DksA concentrations are constant under various growth conditions (Paul et al., 2004; Rutherford et al., 2007). DksA acts both on its own and together with (p)ppGpp to regulate transcription initiation (Paul et al., 2004; Paul et al., 2005) and elongation (Tehranchi et al., 2010; Furman et al., 2012). DksA also prevents transcription stalling when translation and transcription are uncoupled (Zhang et al., 2014) and improves transcription fidelity (Roghanian et al., 2015; Satory et al., 2015). For a recent review describing transcriptional responses to DksA and (p)ppGpp, see Gourse et al. (2018).
In this study, we focused on Nal-induced damage, which introduces both DNA-protein adducts and double-strand breaks (DSBs). In Gram-negative bacteria, Nal predominantly targets gyrase, a type II topoisomerase.
Gyrase introduces negative supercoils into DNA to relieve torsional stress in front of replisomes and transcribing RNAPs (reviewed in Drlica et al. (2008) and Aldred et al. (2014)). Type II topoisomerases induce staggered DNA nicks 4 bp apart on both strands and bind covalently to the 5′ phosphate of the two strands, allowing a second DNA duplex to pass through the DSB. Nal stabilizes the transient gyrase-DNA cleavage complex, preventing DNA religation. The gyrase adduct and the DSB pose a barrier to replication and transcription, which leads to irreversible chromosome fragmentation and cell death (Malik et al., 2006). DksA was proposed to enhance survival after Nal treatment by destabilizing transcription complexes, thus clearing the way for recombination and DNA repair (Meddows et al., 2005). Nevertheless, a direct role for DksA in the repair of DSBs or the removal of RNAP has not been shown. Additionally, inactive transcription complexes are removed by the Rho helicase, which supports chromosome integrity by suppressing replication fork collisions with stalled RNAPs and the subsequent formation of DSBs (Washburn and Gottesman, 2011).
Here we describe an interaction of E. coli DksA with RNAP that creates new replication origins and promotes the repair of Nal-induced DSBs. We analyzed the role of DksA in E. coli MDS42, an MG1655 derivative lacking ~14% of chromosomal DNA, including non-essential genes and horizontally acquired sequences (Pósfai et al., 2006). We chose this synthetic E. coli strain to ensure that the observed cellular responses in the absence of the transcription factor DksA do not stem from the presence of cryptic prophages. It was shown previously that the rac prophage present in the MG1655 genome renders it more sensitive than MDS42 to bicyclomycin, an antibiotic targeting the transcription termination factor Rho (Cardinale et al., 2008). However, we also tested MG1655 in several experiments.
We find that DksA plays an active role in cSDR and confirm its essentiality in the repair of Nal-induced DNA damage (Meddows et al., 2005). We assess the roles of other RNAP-interacting factors in these DksA-requiring pathways. Importantly, and in contrast to the repair of phleomycin-induced DNA lesions, we show that DksA does not act passively to exclude GreA/B from the RNAP secondary channel (Sivaramakrishnan et al., 2017). We propose instead that DksA destabilizes transcription elongation complexes during cSDR and DNA repair but leaves the RNA:DNA hybrid to serve as a primer for new DNA synthesis.
DksA is required for cSDR and repair of DNA DSBs bearing protein adducts
In the absence of RNase HI (ΔrnhA), E. coli cells are capable of replicating using not only the chromosomal origin of replication oriC, but also DnaA-independent oriK sequences, which fire randomly with respect to the cell cycle (von Meyenburg et al., 1987; Maduike et al., 2014). oriK sites contain R-loops that are extended by DNA Pol I to form new replication origins. This pathway, named cSDR, enables a strain with a temperature-sensitive DnaA protein to grow at non-permissive temperatures. We introduced the dnaA46ts and ΔrnhA mutations into E. coli MDS42 (Fig. 1A). A dnaA46ts mutant cannot grow at 42°C, whereas a dnaA46ts ΔrnhA strain grows at the highest dilution tested (Fig. 1A i and iii). Since the proteins that extend an RNA primer with dNTPs and reload a replisome have been studied previously, we decided to focus on the possible role of RNAP in the formation of new origins. We studied several RNAP mutants and factors that interact with RNAP, such as the transcription factor DksA, the anti-backtracking Gre factors and (p)ppGpp. Fig. 1A shows that the deletion of dksA prevents the growth of dnaA46ts ΔrnhA at 42°C (Fig. 1A iv). DksA expressed from a plasmid reversed this phenotype (Fig. 1B iv). A dnaA+ ΔrnhA ΔdksA mutant grew at 42°C, supporting a direct role for DksA in cSDR (Fig. 1A vi). In contrast to WT or ΔrnhA strains, we were unable to introduce ΔdksA into a ΔoriC ΔrnhA strain, which replicates only via cSDR (Fig. 1C). The requirement for DksA in cSDR is not specific to the MDS42 background, as the deletion of dksA in the MG1655 dnaA46ts ΔrnhA strain also blocked growth at 42°C (Fig. 7 v and vii). These data indicate that DksA enables the formation of new E. coli origins.
It has been shown that dksA mutants are sensitive to nalidixic acid (Meddows et al., 2005). Nal inhibits the bacterial DNA gyrase A subunit, creating a DNA DSB with stable 5′ DNA-gyrase adducts. This structure creates a lethal barrier for replication and transcription. At present, the repair of Nal lesions is not fully understood. We hypothesized that the repair of Nal-induced DSBs could require the interaction of DksA with RNAP to create an RNA primer for DNA synthesis, as is the case in cSDR. We decided to compare the requirements for the formation of new origins (cSDR) and DSB repair. We confirmed the sensitivity of ΔdksA mutants to Nal (Fig. 2A ii and B) and showed complementation by plasmid-encoded DksA (Fig. 2C iv and D).
The 'DksA-blind' RNAP mutant does not support oriC-independent replication or DSB repair
To confirm that the role of DksA in cSDR and DSB repair involves its interaction with RNAP, we tested a 'DksA-blind' RNAP mutant, rpoC E677G, that does not bind DksA (Satory et al., 2013; Ross et al., 2016). The triple mutant dnaA46ts ΔrnhA rpoC E677G was unable to grow at the non-permissive temperature, unlike the parental dnaA46ts ΔrnhA (Fig. 3 viii and vii). Control dnaA+ strains displayed a similar colony-forming ability at the permissive and non-permissive temperatures. The rpoC E677G mutant was also sensitive to Nal (Fig. 2A iii and B). We conclude that the interaction between DksA and RNAP is required for both cSDR and DNA repair.
(p)ppGpp is required for cSDR but not for the repair of Nal-induced DNA damage

DksA often acts in concert with (p)ppGpp to regulate transcription initiation. We asked if DksA requires (p)ppGpp to promote new origin formation. First, we deleted relA, which encodes the major (p)ppGpp synthetase in E. coli. The dnaA46ts ΔrnhA ΔrelA strain was able to grow at both low and high temperatures, although the colony size at 42°C was decreased relative to the relA+ parent (Fig. S1A). Attempts to additionally delete spoT (the gene encoding a bifunctional (p)ppGpp synthase/hydrolase) and create a (p)ppGpp0 MDS42 strain were unsuccessful. In E. coli, (p)ppGpp binds to RNAP and also to other protein targets. To determine if the interaction of (p)ppGpp and RNAP is required for cSDR, we investigated RNAP mutants defective in (p)ppGpp binding. E. coli RNAP carries two (p)ppGpp-binding sites. Site 1 is formed by the ω and β′ subunits, whereas site 2 is formed by DksA and β′. The mutations disrupting site 1 (RNAP 1−) include a deletion of several amino acids from the ω subunit (rpoZ Δ2-5) and three point mutations in the β′ subunit (rpoC R362A R417A K615A) (Ross et al., 2013). The mutations disrupting site 2 (RNAP 2−) are limited to two substitutions in the β′ subunit (rpoC N680A K681A) that prevent (p)ppGpp but not DksA binding (Ross et al., 2016). We introduced the RNAP 1− and 2− mutations into the dnaA46ts ΔrnhA strain and tested growth at high temperatures. Mutations in either or both (p)ppGpp-binding sites did not affect the growth of the dnaA46ts strains at 32°C (Fig. 4A i-iv). At the non-permissive temperature, mutations in RNAP binding site 1 reduced colony size but did not prevent colony formation by dnaA46ts ΔrnhA (Fig. 4A vi). In contrast, site 2 mutations inhibited the growth of dnaA46ts ΔrnhA at 42°C approximately 1000-fold (Fig. 4A vii). Disruption of both (p)ppGpp-binding sites was even more inhibitory to the growth of dnaA46ts ΔrnhA at 42°C (Fig. 4A viii).
In contrast, disruption of the (p)ppGpp-binding sites had the opposite effect on the sensitivity of strains to Nal. The strains lacking RNAP site 1 grew comparably to the parent, whereas the growth of the RNAP site 2 mutant and of the RNAP sites 1 and 2 mutant was significantly improved relative to the wild type (Fig. 5A and B; Fig. S2A and B). Although the increased resistance of RNAP 2− to Nal held for both the MDS42 and MG1655 backgrounds (Figs 5 and S2), the effect of mutations in (p)ppGpp-binding sites 1 and 2 on cSDR appears to differ depending on the strain background. In MDS42, site 2 plays a bigger role, whereas in MG1655, site 1 seems more important (Fig. S1B). We do not yet have an explanation for this phenomenon.
To eliminate the possibility that the RNAP mutations themselves, rather than the lack of interaction with (p)ppGpp, affect R-loop-initiated replication and DSB repair, we utilized the MG1655 background. Here, we were able to construct a dnaA46ts ΔrnhA (p)ppGpp0 strain lacking both relA and spoT and test it for growth at permissive and non-permissive temperatures. We found that the lack of (p)ppGpp prevented cSDR in MG1655 (Fig. 4B vi) to a larger extent than mutations in the (p)ppGpp-binding sites 1 and 2 (Fig. S1B). However, while the RNAP site 2 mutation significantly increased MG1655 resistance to Nal (Fig. S2A and B), the (p)ppGpp0 mutant was ~10-fold more sensitive to Nal than the parental strain (Fig. 5C and D; Fig. S2C). The discrepancy between the (p)ppGpp0 and RNAP 1−2− phenotypes suggests that the RNAP mutations per se increase resistance to Nal. We confirmed all the (p)ppGpp0 phenotypes by showing that the strains failed to grow on a minimal medium and, therefore, had not accumulated suppressors (Figs S1C and S2D). We conclude that (p)ppGpp plays a significant role in cSDR but not in the repair of Nal-induced DNA damage.
Anti-backtracking factors are not essential for cSDR
The coiled-coil domain of the DksA protein inserts itself into the RNAP secondary channel. The anti-backtracking factors GreA and GreB share a similar structure, enter the secondary channel and compete with DksA for binding to RNAP (Vinella et al., 2012). To examine their potential role in cSDR, we deleted each of the genes from the dnaA46ts ΔrnhA strain and assayed growth at 42°C (Fig. 6 vi-vii). Deletion of greA or greB did not block growth at non-permissive temperatures, indicating that the lack of one of the factors does not prevent cSDR. A double greA greB deletion mutant is temperature-sensitive in E. coli MG1655 but not in MDS42 (Fig. 6 ix). We were able, therefore, to construct and assay an MDS42 dnaA46ts ΔrnhA ΔgreA ΔgreB mutant. This strain, which lacks both Gre factors, grows at the non-permissive temperature (Fig. 6 viii). Moreover, a ΔoriC ΔrnhA ΔgreA ΔgreB mutant was also viable (data not shown). Taken together, the data confirm that the anti-backtracking factors are not required for cSDR.
DksA plays an active role in cSDR and DSB repair
It is possible that GreA/GreB block cSDR, and that the role of DksA is to reduce the entry of these factors into the RNAP secondary channel (Sivaramakrishnan et al., 2017). To address this question, we attempted to delete greA from dnaA46ts ΔrnhA ΔdksA. We reasoned that if the increased interaction of GreA with RNAP in the absence of DksA inhibited cSDR, then deleting greA should restore the ability of the strain to replicate in an oriC-independent manner. We were not able to construct the double ΔdksA ΔgreA mutant in MDS42. We could, however, construct the dnaA46ts ΔrnhA ΔdksA ΔgreA mutant in the MG1655 background (Fig. 7). The mutant was unable to form colonies at the non-permissive temperature, indicating that DksA exclusion of GreA/B does not account for the DksA requirement in oriC-independent replication (Fig. 7 viii).
Next, we asked if the deletion of greA rescues the Nal sensitivity of a ΔdksA mutant, as would be predicted if DksA acted passively. dksA mutants are sensitive to the radiomimetic drug phleomycin (Sivaramakrishnan et al., 2017) (Fig. S3A). A greA deletion not only improved wild-type growth in phleomycin, but also suppressed the sensitivity of a dksA mutant. This suggested that DksA acts passively to enhance DNA damage repair by excluding GreA from the RNAP secondary channel, thus favoring RNAP backtracking (Sivaramakrishnan et al., 2017). We therefore tested the Nal sensitivity of an MG1655 ΔdksA ΔgreA mutant. Initially, we tested the susceptibility to Nal as in previous experiments, by serially diluting strains and plating them on LB agar containing defined Nal concentrations. We observed that ΔdksA ΔgreA formed colonies at higher dilutions than ΔdksA alone (Fig. S3B). However, unlike the WT or ΔgreA strains, ΔdksA ΔgreA formed single colonies starting from the 10−1 dilution. This suggested to us that Nal is bacteriostatic for the double mutant and prompted us to use an additional assay to investigate the effect of Nal on ΔdksA ΔgreA. Growth in the presence of the antibiotic was monitored by the absorbance of cultures at OD600, as well as by counting the viable cells present in the cultures after 3 and 6 h of incubation. The ΔgreA mutant and the wild-type strain were equally sensitive to Nal and increased in cell mass as well as viability during growth in LB with similar kinetics (Fig. 8). The ΔdksA mutant was very sensitive to Nal, showing little increase in culture density and decreasing rapidly in viability with exposure to the inhibitor. After 3 h in Nal, only 3% of the initial ΔdksA culture survived, and by 6 h the viability count was 1% that of the input. The double ΔdksA ΔgreA strain was also sensitive to Nal, showing little increase in OD600 over the 6 h time period (Fig. 8A). However, Nal was bacteriostatic for the double mutant, rather than bactericidal, indicating that ΔgreA has some protective effect in a ΔdksA mutant (Fig. 8B). These findings indicate that DksA plays an active role in the repair of Nal lesions, rather than the passive one of excluding GreA.
Rho-dependent termination is required for the repair of Nal-induced DNA damage
Inhibition of Rho-dependent transcription termination leads to chromosomal DSBs (Dutta et al., 2011; Washburn and Gottesman, 2011). This is thought to result from transcription-replication clashes, rather than from failure to repair DSBs. We find that Rho is required to repair DSBs induced by Nal and/or to suppress clashes resulting from such breaks. As shown in Fig. 9, the rho15 missense mutant is highly sensitive to Nal (Fig. 9A iii and C). To determine if Rho and DksA are part of the same repair pathway, we constructed a strain bearing both the ΔdksA and rho15 mutations. To test for epistasis, we lowered the Nal concentration from 3 to 1.5 µg ml−1. At this concentration, both the ΔdksA mutant and the wild type grew (Fig. 9A i-ii), but the rho15 mutant was ~100-fold more sensitive than the parental strain (Fig. 9A iii and i). The double mutant was more growth-defective than the rho15 mutant by itself (Fig. 9A iii-iv). We conclude, therefore, that Rho and DksA are involved in different pathways of recovery from Nal-induced DNA damage.

[Figure legend (Fig. 5): A. The RNAP site 1 mutation did not affect growth on Nal, whereas the RNAP site 2 mutant was more resistant than the wild type. The nalidixic acid concentration was increased to 4 µg ml−1. Strains (i-iv): KK04A, KM915, KK05A, KM917. B. Calculated percentage survival of strains on LB + Nal vs. LB alone. The graph shows mean percentage survival with one standard deviation. Statistical analysis was performed using a nonparametric two-tailed Mann-Whitney test, comparing combined data for RNAP 2+ vs. RNAP 2− mutants. **p < 0.01, n = 3. C. (p)ppGpp is not required for the repair of nalidixic acid-induced DNA damage. The Nal concentration was decreased to 2.5 µg ml−1 due to the higher sensitivity of MG1655-derived strains. Strains (i-iv): MG1655, KM773, RLG850, RLG847. D. As in (B), but statistical analysis was performed using a nonparametric Kruskal-Wallis test with Dunn's post test, comparing all data sets. *p < 0.05, n = 6, 4, 4, 4, respectively.]
Roles of RNase HI and DksA in Nal-induced DNA damage repair
Inactivation of RNase HI leads to persistent R-loops, which are necessary for the formation of new DNA origins. On the other hand, R-loops can initiate DNA breaks (Wimberly et al., 2013). To test if the deletion of rnhA affects Nal sensitivity and if DksA and RNase HI act in the same pathway, we constructed ΔrnhA and ΔdksA ΔrnhA mutants. Abrogation of RNase HI activity exacerbated Nal sensitivity ~100-fold at 3 µg ml−1 of Nal compared to the wild-type strain (Fig. 9B ii and D). At lower Nal concentrations, the growth of the rnhA mutant was similar to that of the wild type. Combined, the ΔrnhA and ΔdksA mutations increased Nal sensitivity more than either mutation alone (Fig. 9B and D). We propose that both DksA and RNase HI act to prevent or repair Nal-induced DNA damage, but that they participate in separate pathways.
A mutation in the RNAP main channel suppresses Nal sensitivity of the dksA mutant and restores new origin formation
To test the hypothesis that DksA might decrease the stability of RNAP and thus contribute to cSDR and DNA repair, we tested several previously isolated RNAP mutants that allow replication in the absence of the accessory replicative helicases Rep and UvrD (Baharoglu et al., 2010). One such mutation, rpoB D444G, efficiently suppressed the Nal sensitivity of the dksA mutant (Fig. 10A iii-iv and B). Additionally, as shown in Fig. 10C, the rpoB D444G substitution was also able to restore cSDR in the dnaA46ts ΔrnhA ΔdksA strain (Fig. 10C vii-viii). rpoB D444G was shown not only to bypass the need for the accessory replicative helicases required to remove transcribing RNAPs, the major obstacle to replication, but also to improve the UV resistance of ruvABC, a Holliday junction resolvase mutant (Baharoglu et al., 2010). Based on these observations, it was proposed that the rpoB D444G mutation increases the intrinsic instability of RNAP-DNA complexes, facilitating both the removal of RNAP upon replication-transcription collisions and replication restart (Baharoglu et al., 2010). Our results are consistent with this model and suggest that destabilization of RNAP is required both for the formation of new origins and for DNA repair.
Separation-of-function dksA mutants reveal distinct roles of DksA in cSDR and in DNA repair
To ask if the roles of DksA in cSDR and DNA repair were identical, we tested several DksA mutations previously described as able to complement a dksA deletion. Fortuitously, two DksA point mutants, R91A and D71N/D74N (NN), displayed a separation-of-function phenotype. Both were able to complement the sensitivity of ΔdksA to Nal, but neither suppressed the temperature sensitivity of dnaA46ts ΔrnhA ΔdksA. This phenotype was seen in both the MDS42 (Fig. 11) and MG1655 backgrounds (Fig. S4). The DksA R91A mutation lies in the coiled-coil domain; DksA NN carries two substitutions at the tip of the domain. Both DksA mutants are able to bind to RNAP, but are unable to inhibit transcription from the rrnB P1 promoter in vivo or in vitro (Parshin et al., 2015). When overexpressed from a lac promoter, they can support the growth of ΔdksA on a minimal medium after prolonged incubation (Parshin et al., 2015). These results suggest that the roles of DksA in cSDR and in the repair of Nal-induced DSBs are not identical.
Discussion
We report here a requirement for the E. coli RNAP-associated protein DksA in the formation of new origins of replication (cSDR) in dnaA46ts ΔrnhA or ΔoriC ΔrnhA mutants (Fig. 1). We also confirm and extend the observation that dksA mutants are sensitive to DNA damage induced by Nal (Meddows et al., 2005). The requirement for DksA for both the formation of new origins and the repair of Nal-induced DNA damage was demonstrated using a ΔdksA mutation or rpoC E677G, an RNAP β′ subunit mutant that does not bind DksA (Satory et al., 2013; Ross et al., 2016) (Figs 1-3).
DksA competes for access to the RNAP secondary channel with the anti-backtracking factors GreA and GreB. The interplay between the three proteins within cells is complex, involving not only competition for RNAP, but also mutual control of the expression of their genes. Their effects on RNAP activity are in some instances redundant and in others competitive (Vinella et al., 2012). Here, we show that DksA plays an active role both in cSDR and in Nal-induced DSB repair, rather than simply preventing the access of anti-backtracking factors to the RNAP secondary channel. Thus, the requirement for DksA is not obviated by a greA deletion (Figs 7 and 8). This is in contrast to the mainly passive role of DksA in the repair of phleomycin-induced DNA damage, which is attributed to the exclusion of GreA and thus to the enhancement of RNAP backtracking (Sivaramakrishnan et al., 2017). The difference in the type of DNA damage inflicted by Nal versus phleomycin might account for this discrepancy. Phleomycin is a glycopeptide antibiotic that cleaves the DNA in the presence of metal cofactors and O2, leaving simple DSBs (Sleigh, 1976). In contrast, Nal-induced DSBs carry 5′ type II topoisomerase adducts. Repair of such adducts in eukaryotic cells is known to involve different repair functions than simple DSBs (Aparicio et al., 2016).

[Figure legend (Fig. 8): A. Growth curve in the absence and presence of Nal. Exponential phase cultures were diluted to OD600 = 0.01 and the absorbance was measured hourly. Strains: MG1655, KM1034, KM773, KM1054. B. Quantification of the increase in viable count at 3 and 6 h compared to the viability at t0, arbitrarily set to 1 for each replicate. The graph represents mean and standard deviation, n = 3. A standard deviation higher than the mean results in negative error bars crossing the x-axis when the y-axis is on a logarithmic scale.]
DksA, together with the RNAP β′ subunit, forms (p)ppGpp-binding site 2, which is responsible for most of the effects of (p)ppGpp on transcription initiation (Ross et al., 2016). It is conceivable, therefore, that it is not the lack of DksA per se, but the loss of the transcriptional control exerted by (p)ppGpp that is responsible for the inability of ΔdksA and rpoC E677G mutants to carry out cSDR and Nal-induced DSB repair. Indeed, a lack of (p)ppGpp prevented cSDR in the dnaA46ts ΔrnhA strain (Fig. 4B). However, the (p)ppGpp0 strain was only fractionally more sensitive to Nal than the wild type and more resistant than ΔdksA (Fig. 5C). These results suggest that the effect of ΔdksA on cSDR could be (p)ppGpp-dependent. DksA plays an active, (p)ppGpp-independent role in the repair of Nal-induced DNA damage, consisting of DSBs and Top II DNA adducts. This lack of (p)ppGpp involvement is in contrast to the described role of (p)ppGpp in transcription-coupled nucleotide excision repair (TC-NER) (Kamarthapu et al., 2016). However, the main role of (p)ppGpp in TC-NER is to facilitate RNAP backtracking away from the damage, which allows efficient repair. In the case of Nal-induced DNA damage, backtracking does not significantly enhance repair, since the lack of GreA, an anti-backtracking factor, did not rescue the sensitivity of the dksA mutant (Fig. 8). A precedent for a (p)ppGpp-independent role of DksA in genome stability exists, since, as previously reported, the suppression of replication-transcription clashes by DksA is likewise independent of (p)ppGpp (Tehranchi et al., 2010).

[Figure legend (Fig. 9): A. The rho15 mutant is more sensitive to Nal than ΔdksA, and together the mutations have an additive inhibitory effect. Strains (i-iv): MDS42, KM885, 10598, 12478. B. The ΔrnhA mutant is more resistant than ΔdksA to Nal and they are not epistatic. Strains (i-iv): MDS42, 10562, KM885, KM777. C. Calculated percentage survival of strains on LB + Nal vs. LB alone. The graph shows mean with standard deviation, n = 3, 2, 3, 4 for Nal concentrations 1, 1.5, 2 and 3 µg ml−1, respectively. D. As in (C), n = 3, 3, 3, 6 for Nal concentrations 1.5, 2, 2.5 and 3 µg ml−1, respectively.]
An analysis of RNAP (p)ppGpp-binding mutants did not fully clarify the importance of DksA-(p)ppGpp-RNAP interactions for cSDR and DNA repair. Surprisingly, the phenotype of the RNAP mutants was different from that of cells lacking (p)ppGpp. Moreover, the two reactions (cSDR and DNA repair) displayed different (p)ppGpp effects. We found that mutations in the RNAP (p)ppGpp-binding site 2 strongly inhibit cSDR in MDS42, but have less of an effect in the MG1655 background (Figs 4 and S1B). On the other hand, mutations in site 1, which is composed of the ω and β′ RNAP subunits, had little effect in MDS42 but inhibited growth in the MG1655 background. At present, we do not have an explanation for this phenotype. In contrast, mutations in site 2 enhanced the repair of DSBs, whereas site 1 mutations did not affect Nal sensitivity (Figs 5 and S2B). As mentioned above, site 2 accounts for most of the (p)ppGpp effects on transcription initiation (Ross et al., 2016). The discrepancy between the phenotypes of the site 2 mutants and the (p)ppGpp0 strain in cSDR and upon exposure to Nal was, therefore, unexpected. RNAP is not the only target of (p)ppGpp; perhaps the interaction of (p)ppGpp with other cellular components could explain the divergent phenotypes. However, the in vivo response of an RNAP sites 1 and 2 double mutant to nutritional shifts and amino acid starvation was equivalent to that of the (p)ppGpp0 strain, confirming that RNAP is the major target of (p)ppGpp (Ross et al., 2016). The opposite effects of RNAP site 2 mutations on the ability to replicate via cSDR and to repair Nal-damaged DNA indicate that the two processes are not identical. Although DksA can bind to the RNAP site 2 mutant (Ross et al., 2016), this interaction must be altered compared to that with the wild-type RNAP.
Interestingly, two mutations in the coiled-coil domain of DksA and the RNAP site 2 mutation displayed similar cSDR and DNA repair phenotypes. Both DksA R91A and DksA NN, when overexpressed, supported the repair of Nal-induced DNA damage in ΔdksA, but did not suppress the temperature sensitivity of the dnaA46ts ΔrnhA ΔdksA strain (Fig. 11). The DksA residue R91 is positioned close to the RNAP (p)ppGpp-binding site 2 residues β′ N680 and K681 and most likely forms salt bridges with the phosphate groups of (p)ppGpp (Molodtsov et al., 2018). The DksA R91A mutant protein binds to RNAP (albeit with reduced affinity) and, similarly to β′ N680A K681A, strongly inhibits (p)ppGpp-dependent functions (Parshin et al., 2015; Ross et al., 2016). Thus, R91 is proposed to contribute to the formation of RNAP (p)ppGpp-binding site 2. However, unlike the RNAP site 2 mutant, the DksA R91A substitution also limited DksA inhibition of transcription in the absence of (p)ppGpp (Ross et al., 2016). The RNAP site 2 mutant and DksA R91A both supported growth on minimal media after a prolonged incubation (Parshin et al., 2015; Ross et al., 2016). The DksA R91 residue interaction with the β′ rim helices may stabilize DksA in the secondary channel and aid in the positioning of the tip of the DksA coiled-coil domain within the active center of RNAP (Parshin et al., 2015). The DksA NN mutant, with D71N D74N substitutions at the tip of the coiled-coil domain, had phenotypes similar to DksA R91A, enabling DNA repair but not cSDR (Fig. 11). Residue D74 is very well conserved and was previously shown to be required for DksA function both alone and together with (p)ppGpp at RNAP site 2 (Parshin et al., 2015; Ross et al., 2016). Residue D74 interacts with the substrate-binding region of the RNAP active site and is essential for DksA activity (Parshin et al., 2015). Taken together, these data suggest that the correct positioning of the DksA coiled-coil tip in the RNAP active center is not required for the repair of Nal-induced DNA damage but is critical for cSDR. Similarly, DksA NN suppresses transcriptional pausing and transcription-replication conflicts even though it cannot regulate transcription initiation (Tehranchi et al., 2010). This further supports the notion that the requirement for DksA in the repair of Nal-induced DNA damage involves its role in transcription elongation rather than transcription initiation.
DksA was dispensable for both DNA repair and cSDR in an RNAP mutant with the rpoB D444G substitution. D444 is located in a linker joining the βlobe/i4 domain and the main body of the β subunit (Fig. 10D), and could stabilize the transcription elongation complex (TEC). Several lines of evidence suggest that the rpoB D444G mutation destabilizes RNAP-DNA complexes during transcription initiation and/or elongation. The rpoB D444G mutation allows cells lacking accessory replicative helicases to overcome rich-media synthetic lethality, enables their growth in the presence of an inverted rrn operon and facilitates replication restart (Baharoglu et al., 2010). Similarly, the rpoB D444G mutation was also shown to enhance the UV survival of ruvABC mutants, which are unable to resolve Holliday junctions, the last step of homologous recombination (Baharoglu et al., 2010). It has been proposed that mutations that destabilize RNAP-DNA complexes facilitate the repair and the removal of obstacles that might otherwise block replication and create the need for RuvABC proteins to promote restart (Trautinger and Lloyd, 2002). In our study, the rpoB D444G mutation rescued the ability of dksA mutants to replicate via cSDR and to repair Nal-induced DNA damage (Fig. 10), which we also attribute to the decreased stability of TECs. A recent report demonstrated that DksA binding to RNAP in the absence of (p)ppGpp distorts both structures as compared to their apo-forms or when bound in a ternary complex with (p)ppGpp (Molodtsov et al., 2018). In the binary complex, the CT-helix of DksA rotates the βlobe/i4 domain. The β D444G substitution could, therefore, increase the flexibility of the βlobe/i4 domain, distorting the RNAP pincers and thus phenocopying DksA bound without (p)ppGpp in the secondary channel. We speculate that destabilization of RNAP is required for both cSDR and Nal-induced DNA repair.
Although no evidence for DksA destabilization of the TEC in vitro has been described (Roghanian et al., 2015; Kamarthapu et al., 2016), it is not ruled out that DksA might promote transcription termination in vivo. Indeed, DksA reduces transcription-replication clashes in vivo, implying that the protein acts on elongating RNAP (Tehranchi et al., 2010). Note that we find that Rho, the transcription termination factor, is essential for recovery from Nal-induced DNA damage (Fig. 9). Rho maintains genome stability by preventing replisome-TEC clashes that would otherwise induce replication fork arrest and DSBs (Washburn and Gottesman, 2011). rho and dksA mutations are not epistatic, suggesting that they affect different repair pathways, possibly interacting with different states of elongating RNAP.
cSDR and DNA repair presumably share the requirement for the removal of RNAP. For cSDR to occur, RNAP has to be removed to allow DNA Pol I access to the RNA primer. The Rho factor removes both the RNAP and the RNA:DNA hybrid and thus cannot support cSDR. We suggest that DksA might destabilize the elongating RNAP without unwinding the RNA:DNA hybrid. This notion requires that the 9-10 bp RNA:DNA hybrid in the TEC be sufficiently stable to persist after RNAP removal. Hybrids of this length have been purified (A. Mustaev, personal communication). Furthermore, in vitro construction of a TEC involves the addition of RNAP to an RNA:ssDNA hybrid. The hybrid is then further stabilized by the addition of the complementary DNA strand (Komissarova et al., 2003). In cSDR, the RNA:DNA hybrid might be stabilized by RecA-dependent formation of an R-loop that would incorporate the 5′ end of the nascent transcript.
In the case of DNA repair, destabilization of the TEC by DksA could expose the DNA to allow recombination and the assembly of replication forks, as previously suggested (Meddows et al., 2005). If DksA could remove RNAP without disturbing the RNA:DNA hybrid (and possibly the R-loop upstream), DNA synthesis extending the RNA primer would allow the assembly of replication forks in a manner similar to cSDR. In vitro experiments supporting this notion have been reported. Thus, the E. coli replisome can use an RNA transcript as a primer to continue leading-strand synthesis after a collision that displaces RNAP from the DNA template (Pomerantz and O'Donnell, 2008). Future experiments with reconstituted replication-transcription systems in vitro will be necessary to establish the precise role of DksA in cSDR and DNA repair.
Bacterial strains
All bacterial strains and plasmids used in this study are listed in Supplementary Tables 1 and 2. The strains used in supplementary figures and the strains used for construction are in Supplementary Tables 3 and 4.

Viability assays

E. coli strains were grown for 18 h at 37°C with shaking in LB broth. The cultures were then serially diluted 10-fold in M9 salts. Five-microliter aliquots were spotted on LB agar plates and incubated at 32°C and 42°C to assess the replication of dnaA46ts strains via cSDR. To test nalidixic acid (Nal) sensitivity, 5 µl aliquots of 10-fold dilutions were spotted on LB agar plates with and without Nal at a specified concentration. When required, 34 µg ml−1 of chloramphenicol or 100 µg ml−1 of ampicillin was added to the medium for plasmid maintenance. 1 mM IPTG was added to induce gene overexpression, where indicated. Nal sensitivity is presented as the percentage survival on LB + Nal vs. LB. All data points are shown on the graphs, with the mean marked in red and the standard deviation in black. Statistical analysis was performed using the Kruskal-Wallis test with Dunn's post test, comparing all the data sets. Alternatively, two sets of data were compared using the Mann-Whitney test. All experiments were performed at least twice; representative data sets are shown.
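For orientation, the percentage-survival readout described above reduces to a few lines of arithmetic. The sketch below back-calculates cfu ml−1 from a spotted dilution series and takes the ratio of the two plate types; the colony counts, dilution levels and helper function are illustrative assumptions, not data from this study.

```python
# Minimal sketch of the percentage-survival calculation (illustrative only).
# Colony counts, dilution levels and the spot volume are assumptions.

def cfu_per_ml(colonies: int, dilution_exponent: int, spot_volume_ml: float = 0.005) -> float:
    """Back-calculate cfu/ml from colonies counted in a 5 ul spot of a 10^-n dilution."""
    return colonies * (10 ** dilution_exponent) / spot_volume_ml

# Hypothetical counts at the last countable dilution on each plate type
lb_only = cfu_per_ml(colonies=42, dilution_exponent=6)  # LB agar
lb_nal = cfu_per_ml(colonies=38, dilution_exponent=4)   # LB agar + Nal

percent_survival = 100 * lb_nal / lb_only
print(f"Survival on LB + Nal vs. LB: {percent_survival:.2f}%")
```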
Growth in the presence of Nal
Strains were grown overnight, diluted (100 µl into 5 ml of LB in a 50 ml tube) and grown at 37°C until the cultures reached approximately 10⁸ cfu ml−1, which corresponds to OD600 ~ 0.3-0.5, depending on the strain. Cultures were then diluted to OD600 = 0.01 in 10 ml of LB and split into two 50 ml tubes; one tube was treated with Nal to a final concentration of 3 µg ml−1. The cultures were then incubated, shaking, for 6 h at 37°C. Growth was monitored by measuring the absorbance hourly. The viability of the cultures at 0, 3 and 6 h was assessed by serially diluting and spotting on LB plates in triplicate and calculating the cfu ml−1 after overnight incubation. The viability of each culture at t0 was arbitrarily set to 1, and the viability at t3 and t6 was normalized and presented graphically.
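The t0-normalization step can be made concrete with a short calculation; in the sketch below, the cfu ml−1 values are placeholders rather than measured data, and the replicate layout is an assumption for illustration.

```python
# Minimal sketch of the t0-normalization used for the viability time course.
# The cfu/ml values below are placeholders, not measured data.
import numpy as np

# rows: replicates; columns: t = 0, 3 and 6 h (cfu/ml)
cfu = np.array([
    [2.0e6, 6.0e4, 2.0e4],
    [1.5e6, 4.5e4, 1.2e4],
    [2.4e6, 9.0e4, 3.1e4],
])

normalized = cfu / cfu[:, [0]]  # viability at t0 arbitrarily set to 1 per replicate
mean = normalized.mean(axis=0)
sd = normalized.std(axis=0, ddof=1)
for t, m, s in zip((0, 3, 6), mean, sd):
    print(f"t = {t} h: {m:.3g} +/- {s:.3g}")
```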
"year": 2019,
"sha1": "c2311d1f5595dc9d139b84494fb170379ee032dd",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/mmi.14227",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "c2311d1f5595dc9d139b84494fb170379ee032dd",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Assessing the impact of COVID-19 on the health of geriatric patients: The European GeroCovid Observational Study
Background Despite the growing evidence on COVID-19, there are still many gaps in the understanding of this disease, especially in individuals of advanced age. We describe the study protocol of GeroCovid Observational, a multi-purpose, multi-setting and multicenter initiative that aims at investigating: risk factors, clinical presentation and outcomes of individuals affected by COVID-19 in acute and residential care settings; the best strategies to prevent infection in long-term care facilities; and the impact of the pandemic on neuropsychologic, functional and physical health, and on medical management, in outpatients and home care patients at risk of COVID-19, with a special focus on individuals with dementia. Methods GeroCovid involves individuals aged ≥60 years, at risk of or affected by COVID-19, prospectively or retrospectively observed since March 1st, 2020. Data are collected at multiple investigational sites across Italy, Spain and Norway, and recorded in a de-identified clinical e-Registry. A common framework was adapted to the different care settings: acute wards, long-term care facilities, geriatric outpatient and home care, and outpatient memory clinics. Results As of September 16th, 2020, 66 investigational sites had obtained their Ethical Committee approval and 1618 cases (mean age 80.6 [SD=9.0] years; 45% men) had been recorded in the e-Registry. The average inclusion rate since the study start on April 25th, 2020, is 11.2 patients/day. Enrollment of new cases will end on December 31st, 2020, and the clinical follow-up will end on June 30th, 2021. Conclusion GeroCovid will explore relevant aspects of COVID-19 in adults aged ≥60 years with high-quality and comprehensive data, which will help to optimize COVID-19 prevention and management, with practical implications for ongoing and possible future pandemics. Trial registration NCT04379440 (clinicaltrials.gov).
Multimorbidity and frailty, whose prevalence increases with age [5,6], have been proposed to partly explain the worse COVID-19 prognosis in advanced age [2,[7][8][9]. "Inflammaging", which is characterized by a complex, abnormal pattern of immune alterations promoting inflammation over an effective immunologic response, is another possible cause of the age-related increased vulnerability to COVID-19 [10,11]. The cytokine storm, an ominous hallmark of the response to COVID-19, may also be more likely to occur and to be exacerbated in older people [10,12]. Apart from pathophysiologic aspects, it cannot be excluded that shortages of resources sometimes prompted the prioritization of care for young and middle-aged adult patients, relatively limiting the access of older individuals to more intensive settings of care. In addition, it is possible that atypical presentations of COVID-19, delaying recognition and appropriate care, may have further complicated the clinical management of older patients [13].
It is also important to recognize that a considerable fraction of the excess mortality in older people has been related to poor logistic plans and organization of long-term care facilities, which put the residents at very high risk of infection and inadequate care [14,15]. Insufficient attention to appropriate disinfection practices and the shortage of protective equipment for the staff further contributed to making the nursing home an "at risk" setting for older individuals [16].
Finally, the COVID-19 pandemic also affected the quality of care of non-COVID-19 geriatric patients, especially those with dementia and psychiatric disorders, who were more likely to be burdened by the adverse effects of the loss of social contacts and of continuous medical monitoring [17,18]. Even if difficult to quantify, the influence of the pandemic on older multimorbid non-COVID-19 individuals had relevant consequences, as shown by the dramatic increase in mortality after acute myocardial infarction and by the sharp worsening of psychiatric disorders observed after the beginning of the lockdown phase [19,20].
The Italian Society of Gerontology and Geriatrics, in collaboration with the Norwegian Geriatric Society, planned a multi-setting, multinational and multi-scope registry, the Geriatric Population COVID-19 (GeroCovid) Observational Study, aimed at pursuing the following main objectives:
- to assess age-related changes in risk profile, clinical presentation, needs of care, and short- and medium-term outcomes of COVID-19 patients aged ≥60 years, in acute and residential settings;
- to explore the impact of the pandemic on the functional ability and the cognitive, psychological and behavioral status of non-COVID-19 individuals aged ≥60 years, with special attention to patients with dementia;
- to identify the adaptive strategies used by outpatient and home care services to compensate for the limitation of contacts imposed by COVID-19, with a focus on telemonitoring;
- to investigate the healthcare measures taken in long-term care facilities to prevent and counter the COVID-19 pandemic.
This article describes the GeroCovid study protocol and provides a comprehensive overview of the methods, metrics and expected results of the project.
Study design and coordination
GeroCovid is an observational retrospective-prospective study involving adults aged ≥60 years, evaluated during the COVID-19 pandemic. The choice of this age limit will allow us to explore the study outcomes both in working-age adults approaching retirement and in those of more advanced age, capturing possible differences in risk factors, clinical course, prognosis, and burden of COVID-19. Moreover, this cut-off is currently used in several demographic statistics that report health-related data in the general population divided by 10-year age groups [21].
The study has been promoted by the Italian Society of Gerontology and Geriatrics and involves multiple investigational sites across Italy and Norway. GeroCovid includes cases observed since March 1st, 2020. Enrollment ended on December 31st, 2020, and the clinical follow-up of prospective cases will end on June 30th, 2021. The study is registered in clinicaltrials.gov (Trial Registration: NCT04379440).
GeroCovid was designed by the Italian Society of Gerontology and Geriatrics, in collaboration with Bluecompanion, which has developed and adapted a dedicated electronic registry to collect and harmonize clinical data from the different care settings. Based on the setting, the study has been structured into six main research cohorts: GeroCovid acute wards, GeroCovid home and outpatients' care, GeroCovid dementia-drug monitoring, GeroCovid dementia-psychological health, GeroCovid long-term care facilities (LTCFs), and GeroCovid outcomes. The GeroCovid coordinating group involves clinicians specialized in geriatric medicine, experts in epidemiology, and ICT experts. This multidisciplinary group includes the coordinators of the different project cohorts. Under the supervision of the GeroCovid Principal Investigator (RAI) and with the support of the methodological team (SDS, GZ), the cohorts' coordinators closely monitor the research activities of the involved sites, including the enrollment of the study participants and the data collection. Since March 2020, the coordinating group has been meeting through weekly videoconferences to discuss the development and optimization of GeroCovid research activities, and to find solutions to possible concerns related to the study.
Study setting and objectives
GeroCovid is a multicentre and multi-setting study whose primary endpoint is the change in health status, defined according to the World Health Organization (WHO) classification [22] and based on the incidence of hospitalizations, serious adverse events (SAEs) and death. The settings and the specific primary and secondary objectives of each GeroCovid cohort are reported in Table 1.
Study population
The GeroCovid study is consecutively enrolling individuals aged ≥60 years with, or at risk of, COVID-19, either retrospectively or prospectively observed. The "risk" of getting SARS-CoV-2 infection or of experiencing the negative effects of the pandemic was specified for each setting. Of the six GeroCovid cohorts, two consider only COVID-19 patients, during the hospital stay (GeroCovid acute wards) and after hospital discharge (GeroCovid outcomes). The GeroCovid LTCFs cohort involves both residents affected by COVID-19 and those at risk of getting SARS-CoV-2 infection according to suspected symptoms or contacts with confirmed COVID-19 cases. The risk of experiencing the negative effects of the pandemic applies to the GeroCovid home and outpatients' care, GeroCovid dementia-drug monitoring, and GeroCovid dementia-psychological health cohorts, which include, respectively, home-dwelling patients receiving geriatric home care and outpatients with dementia referred to memory clinics. Further details on the inclusion criteria for each GeroCovid cohort are reported in Table 1. The exclusion criteria are: lack of signed informed consent to participate in the study; and, in case of impossibility to inform the patient due to her/his state of consciousness and/or awareness of the disease condition, lack of a signed declaration by the investigator attesting that no explicit opt-out advance directives by the subject existed at inclusion in the registry.
Data collection
Trained physicians with expertise in geriatric medicine are collecting GeroCovid participants' data in a specifically designed e-Registry, accessible through a dedicated online platform (details on the e-Registry can be found below, under "Data management and quality assurance"). Data collection in the GeroCovid e-Registry is organized in five main sections, to describe both the healthcare infrastructures (specialized COVID-19 hospitals, long-term care facilities, etc.) and the features of single cases observed across different care settings.
Characteristics of the participating centre. In the first step, the local coordinator of each site registers and describes his/her centre in the online platform. In this phase, information is given about the type (e.g., acute/post-acute ward for COVID-19 patients, outpatient clinic, memory clinic, long-term care facility) and the characteristics (e.g., number of beds, type and number of healthcare personnel, etc.) of the structure, the date on which the observation period started, and the preventive measures implemented during the COVID-19 pandemic (e.g., adoption of strict limitations on family visits to patients, reduction of non-urgent specialist consultations, body temperature monitoring, isolation of suspected and confirmed COVID-19 cases).
Anamnestic information. For each GeroCovid participant, investigators collect information on his/her demographic characteristics, household setting, pre-COVID-19 lifestyle (smoking and drinking habits, mobility function and physical activity level), chronic diseases, nutritional status, frailty (using adapted criteria from Pedone et al., 2016 [23], and Fried et al., 2001 [5]), regularly used pharmacological treatments (coded through the ATC classification), ongoing or recent (discontinued less than three months earlier) hormone replacement therapy, and previous influenza, anti-pneumococcal and anti-herpes zoster vaccinations. The main patient's diagnosis (and the initial hospital admission diagnosis, only for the GeroCovid acute wards patients who were originally hospitalized due to non-COVID-19 diseases), the observation start and end dates, and the type of outcome (classified as no major change, clinical improvement, serious adverse event, death, transfer to a different hospital, withdrawal) are also collected. Finally, recorded data also concern the impact of the COVID-19 pandemic and physical distancing on social interactions, assistance in daily activities, care provision, psychological reaction and changes in care setting.

Observational phases. For each participant, investigators fill in two or more observation modules corresponding to different disease or evaluation phases. Each observational module includes the following information: evaluation date and patient health status according to the WHO classification [22], vital signs (blood pressure and heart rate), physical examination (general conditions and system-specific evaluation), anthropometry and nutritional status, diagnostic tests for SARS-CoV-2 infection (date and result of nasopharyngeal swab and/or serological tests), arterial blood gas test results, main findings at x-ray and/or computerized tomography scans, blood/urine analyses (including hematology, lipids, biochemistry, inflammatory and cardiac biomarkers, coagulation, thyroid hormones, hepatitis C screening, urine analysis), electrocardiographic test results, COVID-19-like symptoms and date of onset, procedures related to a suspected COVID-19 clinical pattern, and updates of the pharmacological therapy.
Adverse events. Onset date, type (coded according to the MedDRA classification), severity and outcome of any adverse events occurring during the observation period are recorded for each participant.
Specific evaluation scales. Specific information and evaluation scales for each GeroCovid cohort are also collected in the e-Registry. Details in this regard are reported in Table 2.
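To make the structure of a single case record easier to picture, here is a purely illustrative sketch in Python; the class and field names are assumptions chosen for exposition and do not reflect the actual CleanWeb/e-Registry schema.

```python
# Purely illustrative sketch of how a single e-Registry case record could be
# organized; field names are assumptions, not the actual CleanWeb schema.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ObservationModule:
    evaluation_date: str
    who_health_status: str               # WHO classification of health status
    vital_signs: dict = field(default_factory=dict)
    swab_result: Optional[str] = None    # SARS-CoV-2 nasopharyngeal swab

@dataclass
class CaseRecord:
    site_id: str                         # participating centre
    age: int
    sex: str
    cohort: str                          # e.g., "acute wards", "LTCFs"
    anamnestic: dict = field(default_factory=dict)
    observations: List[ObservationModule] = field(default_factory=list)
    adverse_events: List[dict] = field(default_factory=list)

# Example: one de-identified case with a single observation module
case = CaseRecord(site_id="IT-001", age=81, sex="F", cohort="LTCFs")
case.observations.append(ObservationModule("2020-05-02", "ambulatory"))
```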
Data management and quality assurance
GeroCovid data collection is performed in a European de-identified clinical data electronic registry. The GeroCovid e-Registry was adapted from an existing electronic platform that Bluecompanion developed in 2018 for a project called e-Trajectories. In March 2020, in conjunction with the COVID-19 pandemic, Bluecompanion made their health data collection system available to the GeroCovid initiative. E-Trajectories and its GeroCovid adaptation are based on the CleanWeb engine produced by Telemedicine Technologies (Boulogne-Billancourt, France), embedded in a dedicated web platform designed for integrating data from different sources. All data are recorded on web servers located in the European Union. ICT operations are compliant with the European General Data Protection Regulation (GDPR) and with the relevant international standards for clinical trials (ISO 9001 certification and FDA CFR 21 part 11). The platform has been developed thanks to the cooperation of the technical-scientific team of Bluecompanion and the GeroCovid cohorts' coordinators, with the goal of capturing the complexity of the geriatric patient. Moreover, to support appropriate use of the platform, investigators at each investigational site underwent specific training sessions and performed dummy data entry in a "training" environment before being allowed to enter data into the production environment.
Statistical analysis
Sample size. For each GeroCovid cohort, the sample size was estimated through formal computation or following a purposive sampling strategy, depending on the specific primary outcomes and on the availability of literature data from which to estimate the expected effect size. Details on the estimated sample size for each GeroCovid cohort are summarized in Table 2.
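As a hedged illustration of what such a formal computation can look like, the sketch below solves a standard two-group power equation with statsmodels; the effect size, alpha and power are assumed values for demonstration, not parameters taken from the GeroCovid protocol or Table 2.

```python
# Illustrative sample-size computation for a two-group comparison.
# Effect size, alpha and power are assumptions, not GeroCovid values.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5,  # assumed Cohen's d
                                   alpha=0.05,
                                   power=0.80)
print(f"~{n_per_group:.0f} participants per group")  # ~64 per group
```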
Data analysis. Continuous variables will be described as mean ± standard deviation, or as median and interquartile range in the case of a non-normal distribution. Categorical variables will be reported as frequencies and percentages. Quantitative variables will be compared with Student's t-test and analysis of variance, or with the related nonparametric tests (Mann-Whitney or Kruskal-Wallis test) when distributions are shown to be non-normal. The Chi-squared or Fisher's exact test will be used for categorical variables. Based on the study hypothesis, the association between exposures and outcomes of interest will be tested, as appropriate, using linear or logistic regression models, Kaplan-Meier analysis or Cox regressions, and linear mixed models in the case of repeated measures over time. Possible differences by age class (60-64, 65-74, 75-84, ≥85 years) and sex in the study outcomes will be evaluated through interaction tests and stratified analyses.
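The parametric-versus-nonparametric branching described above can be sketched in a few lines with scipy; the two groups below are simulated for illustration and are not study data, and the normality screen (Shapiro-Wilk) is an assumed choice rather than one specified by the protocol.

```python
# Illustrative sketch of the test-selection logic: t-test if both groups
# look normal, Mann-Whitney otherwise. Simulated data, not study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(80, 9, size=60)   # a continuous variable in one stratum
group_b = rng.normal(81, 9, size=70)   # ... and in another

# Shapiro-Wilk as a normality screen for each group
normal = all(stats.shapiro(g).pvalue > 0.05 for g in (group_a, group_b))

if normal:
    stat, p = stats.ttest_ind(group_a, group_b)      # Student's t-test
else:
    stat, p = stats.mannwhitneyu(group_a, group_b)   # nonparametric fallback
print(f"{'t-test' if normal else 'Mann-Whitney U'}: p = {p:.3f}")
```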
Ethical aspects
The GeroCovid Observational overarching protocol was reviewed and approved by the Campus Bio-Medico University Ethical Committee in April 2020. All participating investigational sites further submitted the relevant sub-protocols to their competent local Ethical Committees and institutional review boards, as applicable, according to the Italian legislation. All investigators and the ICT team agreed to work according to GCP (ICH E6-R2). Written or dematerialized informed consent was obtained from each patient. Alternatively, a written declaration was kept on file by the local investigator, in accordance with the derogations applicable during the pandemic.
All individual clinical data were anonymized before data entry. Collected data are protected and stored on a private cloud hosted in an ISO 27001 data center located in the European Union and cannot be lawfully accessed or read by unauthorized users. All recorded clinical data are intended for medical and scientific use for the benefit of the patients, the general and scientific community and health authorities.
Results
The GeroCovid data collection started on April 25th, 2020. As of September 16th, 2020, 66 investigational sites have obtained their local Ethical Committee and institutional board approval, while 24 sites are awaiting final approval. A total of 1618 observed cases (mean age 80.6±9.0 years; 45% men) have so far been recorded in the e-Registry.
The age distribution of male and female participants recorded in the GeroCovid e-Registry is illustrated in Figure 1. Of the total cases, 883 (55%) individuals were assessed in the hospital setting (GeroCovid acute wards and outcomes cohorts), 229 (14%) in long-term care facilities (GeroCovid LTCFs cohort), and 506 (31%) in outpatient or home care services (GeroCovid home and outpatients' care, GeroCovid dementia-drug monitoring, and GeroCovid dementia-psychological health) (Figure 2). The current average inclusion rate is 11.2 patients/day, and the expected final sample should include more than 2000 observed cases. Preliminary results describing the final included population will be available in November 2020, while results over the complete study duration will be available in January 2021.
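The reported inclusion rate follows directly from the dates given in the text: 144 days elapsed between the study start (April 25th, 2020) and the snapshot date (September 16th, 2020), and 1618/144 ≈ 11.2 patients/day. A two-line check:

```python
# Quick arithmetic check of the reported inclusion rate, using the dates
# given in the text (study start April 25th; snapshot September 16th, 2020).
from datetime import date

days = (date(2020, 9, 16) - date(2020, 4, 25)).days  # 144 days of enrollment
print(round(1618 / days, 1))                         # -> 11.2 patients/day
```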
Discussion
The GeroCovid study will contribute to increasing knowledge on the effects of COVID-19 on individuals aged ≥60 years, those most vulnerable to the disease, by providing concrete and useful information to face the ongoing and future pandemics. The strength of the project is the involvement of multiple centers and settings of care, which will allow exploring the multifaceted impact of the COVID-19 pandemic on health status in representative subsets of the geriatric (and pre-geriatric) population. In particular, using high-quality data, the GeroCovid framework will investigate risk factors, clinical presentation and outcomes in COVID-19 inpatients; the best strategies to prevent infection in long-term care facilities; the impact of COVID-19 and social isolation on emotional, neuropsychologic, functional and physical health; and the possibility of remote monitoring of drug treatment in dementia.
GeroCovid will provide original information on three main aspects of the COVID-19 pandemic: SARS-CoV-2 infection onset, the clinical course of COVID-19, and the effects on the health status of people of advanced age, including at-risk individuals not affected by COVID-19.
As regards the first aspect, i.e. disease onset, the study will help to identify the factors associated with SARS-CoV-2 infection and its heterogeneous presentation. So far, inconsistent data have emerged about symptom variability in older adults at disease onset [3,24]. GeroCovid will focus on potentially atypical COVID-19 presentations and on their associated prognostic value. An interesting contribution of GeroCovid will be provided by the involvement of long-term care facilities, which are among the major reservoirs of the frailest older population. The GeroCovid LTCFs cohort will, therefore, give useful insights into disease onset in such individuals, as well as into the most effective preventive measures to be implemented.
With regard to clinical course, GeroCovid will use multicenter information from two European countries to recognize factors heralding a faster and more severe disease progression. The extraction of data from different geographical contexts, even at the national level, will provide a picture of the various therapeutic approaches adopted in the past months based on local guidelines and resource availability. Special attention will also be paid to investigating factors influencing the management of the disease, and to evaluating which therapeutic approaches may lead to better outcomes based on individual characteristics.
Finally, GeroCovid assesses the effects of the pandemic on the health of both COVID-19 patients and older people at risk of COVID-19 in specific care settings, considering physical, mental, and social well-being, in accordance with the WHO Constitution [25]. This step of the project will provide important insights not only for physicians but also for healthcare systems and societies, which have to address the emerging needs of individuals in the post-acute phase of the disease [26,27]. Moreover, GeroCovid will focus on the impact of the pandemic on non-COVID-19 geriatric patients. Indeed, there is evidence that the pandemic could have affected care pathways, as well as the cognitive status, functional abilities, and psychological health of frail older people, particularly those with dementia. Recently published data suggest that, during lockdown, a rapid increase in behavioral and stress-related symptoms was observed in more than half of dementia patients and caregivers [28]. As social distancing measures are likely to last for several months, due to the persistence of the pandemic, it is important to quantify these emerging needs and to assess the ability of health services, including remote telemonitoring, to address them. These aspects will be investigated primarily in the GeroCovid home and outpatients' care, GeroCovid dementia-drug treatment, and GeroCovid dementia-psychological health cohorts.
The multi-setting results of GeroCovid will contribute to developing new evidence-based recommendations promoting the prevention and management of COVID-19, and the optimization of care provision for older patients even in such emergency situations. Therefore, the potential impact of the study extends beyond the health of individuals to the healthcare system. Indeed, the evidence emerging from GeroCovid will improve the clinical management of COVID-19, possibly optimizing resource allocation and increasing the readiness of healthcare and public health systems to confront possible future pandemics. This will improve the resilience of healthcare systems that, in many cases, demonstrated an insufficient capacity to maintain overall efficiency in response to the current outbreak [27]. In addition, GeroCovid will inform on the burden of the disease that, even in its post-acute phase, may raise new care and assistance needs associated with non-negligible costs at the family, societal, and healthcare system levels. Older adults may be especially vulnerable to the consequences of COVID-19, a disease that can alter the labile balance between multiple chronic conditions and treatments, with a negative effect on physical, mental, and functional well-being. In this sense, the COVID-19 pandemic can be considered a prototype of a stressful scenario for the frailest individuals, for our societies, and for healthcare systems. Consequently, information and possible solutions derived from GeroCovid will go beyond the ongoing pandemic and might be applied to future crises of a similar or different nature.
One obstacle that may influence the achievement of the expected goals of GeroCovid concerns the potential heterogeneity of data coming from different settings of care. In this regard, the GeroCovid coordinating group agreed on a minimum core of information shared by all study cohorts. Another possible limitation of the project is that recruitment is limited to people aged 60 years or older, which will not allow GeroCovid to obtain a complete picture of the risk/protective factors, clinical course, and outcomes of COVID-19 across all age groups, from young to older individuals. However, by focusing on advanced age, the study will explore a broad set of key aspects of COVID-19 in precisely the part of the population that has been most burdened by the pandemic.
Conclusion
The multi-setting, multi-purpose and multicentric GeroCovid initiative is a unique opportunity to explore relevant aspects of COVID-19 with high-quality and comprehensive data on the health of individuals aged 60 years or older. This project will help to optimize COVID-19 prevention and management, with practical implications for ongoing and possible future pandemics.
Funding sources
This research did not receive any funding from agencies in the public, commercial, or not-for-profit sectors.
Declaration of Competing Interest
None. | 2021-02-02T17:33:36.421Z | 2021-01-31T00:00:00.000 | {
"year": 2021,
"sha1": "cfcc01bcb610e6fd07d60d197d5153342275d168",
"oa_license": null,
"oa_url": "http://www.ejinme.com/article/S0953620521000170/pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "93a192377160c8343e2c9ee9f637ae141944a458",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
24194362 | pes2o/s2orc | v3-fos-license | Identification of a novel SCAN box-related protein that interacts with MZF1B. The leucine-rich SCAN box mediates hetero- and homoprotein associations.
The SCAN box or leucine-rich (LeR) domain is a conserved motif found within a subfamily of C(2)H(2) zinc finger proteins. The function of a SCAN box is unknown, but it is predicted to form alpha-helices that may be involved in protein-protein interactions. Myeloid zinc finger gene-1B (MZF1B) is an alternatively spliced human cDNA isoform of the zinc finger transcription factor, MZF1. MZF1 and MZF1B contain 13 C(2)H(2) zinc finger motifs, but only MZF1B contains an amino-terminal SCAN box. A bone marrow cDNA library was screened for proteins interacting with the MZF1B SCAN box domain and RAZ1 (SCAN-related protein associated with MZF1B) was identified. RAZ1 is a novel cDNA that encodes a SCAN-related domain and arginine-rich region but no zinc finger motifs. Co-immunoprecipitation assays demonstrate that the SCAN box domain of MZF1B is necessary for association with RAZ1. By yeast two-hybrid analysis, the carboxyl terminus of RAZ1 is sufficient for interaction with the MZF1B SCAN box. Furthermore, MZF1B and RAZ1 each self-associate in vitro via a SCAN box-dependent mechanism. These data provide evidence that the SCAN box is a protein interaction domain that mediates both hetero- and homoprotein associations.
Zinc finger genes encode an abundant class of DNA- and RNA-binding proteins that represent an estimated 5% of the genes in the human genome. Many C2H2 zinc finger genes have been demonstrated to function as transcriptional regulators, and zinc finger genes are frequently targeted for disruption in a variety of human diseases and cancers. The Krüppel-like subclass of mammalian C2H2 zinc finger proteins, first identified in the zinc finger transcription factor TFIIIA, shares a conserved link joining the last histidine of the preceding finger motif to the first cysteine of the next finger (H-C link) (1). Krüppel-like proteins often contain conserved modular domains outside of their zinc finger motifs. These identified domains include the KRAB (Krüppel-associated box) domains A and B, the FAX (finger-associated box) domain of Xenopus, the BTB/POZ (broad complex, tramtrack, and bric-a-brac/poxvirus and zinc finger) or ZiN (zinc finger N-terminal) domain, and the SCAN box or leucine-rich domain.
To date, the functions of the KRAB and BTB/POZ domains have been best characterized. The KRAB domain is a conserved stretch of 75 amino acids found in an estimated one-third of Krüppel-like zinc finger proteins (2). The KRAB domain, further subdivided into domains A and B, functions as a potent transcriptional repressor (3-5) and is predicted to fold into two amphipathic helices (2). The KRAB domain from KOX1 interacts with human TIF1β (also named KAP-1, KRAB-associated protein-1) (6,7) and appears to exert its transcriptional repression activity through this interaction (6,7). In addition, the KRAB-A domain of Kid-1 interacts with KRIP-1 (KRAB-A interacting protein), which is likely to be the murine homologue of TIF1β and KAP-1 (8). The POZ domain defines a conserved region of approximately 120 amino acids and is found in 5-10% of zinc finger proteins. The POZ domain is a protein interaction motif (9) that mediates both homo- (10,11) and heterodimerization (12). Several POZ proteins are transcriptional repressors, including the oncoproteins PLZF (13) and BCL-6 (14), and the POZ domain has been shown to function as an autonomous transcriptional inhibitory domain (15). The POZ domain also has been demonstrated to interact with the co-repressors N-CoR, SMRT, Sin3, and histone deacetylase (16-19), suggesting that POZ-containing proteins mediate transcriptional repression by recruiting histone deacetylase through a co-repressor complex. However, this may not be a general mechanism for POZ-containing transcription factors (20).
The leucine-rich (LeR) domain is a conserved motif present in the amino terminus of a subfamily of C2H2 zinc finger proteins (4). This motif has also been designated the SCAN box, a name derived from the first four proteins found to contain this domain (SRE-ZBP, CT-fin-51, AW-1, number 18 cDNA) (21). To date, the SCAN box has been identified in approximately 20 zinc finger proteins from human, mouse, and rat, including ZNF174 (21), RLZF-Y (22), FPM315 (23), ZNF213 (24), MZF1B 1 and its murine homologue, MZF-2 (25). SCAN box domains are about 80 residues in length, and approximately two-thirds of the amino acids are highly conserved with 80-100% sequence identity. The function of the SCAN box is unknown. Based on protein sequence analysis, the SCAN box is predicted to form two or three amphipathic helices that may be involved in protein-protein interactions (4,21). The SCAN box does not confer transcriptional activation or repression when fused to a heterologous DNA binding domain (4,21), suggesting that the SCAN box is not an independent transcriptional regulatory domain.
To gain a better understanding of SCAN box function, we directed our attention to the SCAN box-containing zinc finger protein, MZF1B. 1,2 MZF1B is an alternatively spliced isoform of the zinc finger transcription factor MZF1 (26) and the human homologue of murine MZF-2 (25). MZF1 is a 485-amino acid protein that contains 13 C2H2 zinc finger motifs arranged in a bipartite DNA binding domain. The consensus DNA binding sites have been identified (27), and MZF may regulate the expression of specific genes in a tissue-specific manner (28,29). MZF1 expression is both necessary for hematopoietic cell differentiation (30) and critical to the regulation of cell proliferation and apoptosis (31-33). MZF1B cDNA encodes a 734-amino acid protein that shares identity with the carboxyl terminus of MZF1, including the 13 C2H2 zinc finger motifs. However, MZF1B encodes an additional 257 residues at its amino terminus, which contain a SCAN box domain. Therefore, the amino-terminal domains unique to each isoform may define the distinct functions of the MZF1 and MZF1B proteins. We hypothesized that the MZF1B SCAN box is a protein interaction domain. A human bone marrow cDNA library was screened for proteins interacting with the MZF1B SCAN box domain, and RAZ1 (SCAN-related protein associated with MZF1B) was identified.
Full-length MZF1B cDNA (amino acids 1-734) and the unique amino terminus of MZF1B (amino acids 1-257) were amplified by polymerase chain reaction (PCR) 3 methods with oligonucleotides 1 and 2 or oligonucleotides 1 and 3, respectively, and subcloned into the EcoRI and HindIII sites of pcDNA3.1 (A) to produce plasmids MZF1B-mh/pcDNA and MZF1B N-term-mh/pcDNA, where "mh" indicates fusion of the protein to the carboxyl-terminal Myc-His epitope tag.
The unique amino terminus of MZF1B (amino acids 1-257) lacking the Myc/His tag was constructed by subcloning the same PCR fragment of MZF1B N-term-mh/pcDNA into pcDNA3.1 (C) to create MZF1B N-term*/pcDNA, which encodes an extra three amino acids (Lys, Ala, Thr) at the carboxyl-terminal end.
Full-length MZF1 cDNA (amino acids 1-485) was PCR-amplified with oligonucleotides 9 and 2. The PCR fragment, flanked by BglII and HindIII, was subcloned into the BamHI and HindIII sites of pcDNA3.1 (A) to create MZF1-mh/pcDNA. The cDNA isolated from yeast two-hybrid screening was removed from the pACT2 expression vector by digestion with BglII (including the hemagglutinin (HA) tag) and subcloned into the BamHI site of pcDNA3.1 (A) to generate ha-RAZ1/pcDNA. β-gal-mh/pcDNA, an expression plasmid encoding Myc-His epitope-tagged β-galactosidase, was purchased from Invitrogen (Carlsbad, CA).
Yeast Two-hybrid Expression Plasmids-MZF1B and RAZ1 cDNA were subcloned into the yeast expression cloning vectors pAS2-1 and pACT2 (CLONTECH; Palo Alto, CA) to generate fusion proteins with the GAL4 DNA binding domain (BD; amino acids 1-147) or the GAL4 activation domain (AD). To generate the bait plasmid, SCAN/pAS2-1, the cDNA encoding the MZF1B SCAN box (amino acids 47-123) was amplified by PCR using oligonucleotides 10 and 11 and subcloned into the EcoRI and HindIII sites of the mammalian expression vector CB6+ (34) to create SCAN/CB6+. SCAN/CB6+ was digested with EcoRI and BamHI, and the MZF1B SCAN box insert was subcloned into pAS2-1.
To generate fusions of the MZF1B SCAN box with the GAL4 AD and HA tag, the cDNA encoding the MZF1B SCAN box (amino acids 47-123) was amplified by PCR using oligonucleotides 12 and 13 and subcloned into the EcoRI and XhoI sites of pACT2 to create SCAN/pACT2. RAZ1/pACT2 is the library clone isolated from the yeast two-hybrid screen as described below. This clone contains the RAZ1 cDNA (amino acids 1-217) library insert subcloned into the EcoRI and XhoI sites of pACT2 to generate a fusion protein containing the GAL4 AD and HA tag.
GAL4 Yeast Two-hybrid Analysis-The MATCHMAKER Two-Hybrid System 2 and GAL4 Human Bone Marrow MATCHMAKER cDNA library were purchased from CLONTECH. A large scale, sequential transformation of the GAL4-SCAN bait fusion (SCAN/pAS2-1) and the bone marrow cDNA library was carried out according to the manufacturer's directions. Briefly, the bone marrow cDNA library (50 μg) was transformed into yeast carrying the SCAN/pAS2-1 bait plasmid.
[TABLE II. Identification of RAZ1, a novel cDNA library clone that interacts with the MZF1B SCAN box. Footnote a: Number of yeast colony-forming units/μg of plasmid DNA (cfu/μg) growing on leucine- and tryptophan-depleted medium: cfu × total suspension vol (μl) / (vol plated (μl) × dilution factor × amount DNA used (μg)) = cfu/μg of DNA.]
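The titer formula in the Table II footnote (repeated in Table IV) can be scripted directly for readers tabulating their own screens. The sketch below is our own Python illustration of that arithmetic, read literally from the footnote; the plate count and volumes shown are hypothetical.

```python
def cfu_per_ug(cfu_counted, total_suspension_ul, volume_plated_ul,
               dilution_factor, dna_ug):
    """Transformation efficiency (cfu per ug of plasmid DNA), following the
    Table II/IV footnote: cfu x total suspension vol (ul) /
    (vol plated (ul) x dilution factor x amount of DNA used (ug))."""
    return (cfu_counted * total_suspension_ul
            / (volume_plated_ul * dilution_factor * dna_ug))

# Hypothetical plate count and volumes, for illustration only:
print(cfu_per_ug(250, total_suspension_ul=10_000, volume_plated_ul=100,
                 dilution_factor=10, dna_ug=50.0))
```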
Data Base Searching-Computer searches were done using the FASTA, BLAST, and MOTIFS algorithms through the Wisconsin Package software (64) or BLAST version 2.0 through the World Wide Web interface. Nucleotide sequences were compared with entries in the GenBank™ or expressed sequence tag (EST) data bases, while peptide sequences were searched against the Protein Information Resources (PIR) or Swiss-Prot data base.
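As an illustration of this kind of motif search, the sketch below scans a peptide sequence with PROSITE-style consensus patterns for the site classes reported for RAZ1. The regular expressions shown are the standard PROSITE consensus patterns and are our assumption; the exact patterns used by the MOTIFS program may differ.

```python
import re

# Standard PROSITE-style consensus patterns (assumed here; the original
# analysis used the MOTIFS program of the Wisconsin Package):
MOTIFS = {
    "casein kinase II phosphorylation": r"[ST]..[DE]",
    "protein kinase C phosphorylation": r"[ST].[RK]",
    "cAMP/cGMP-dependent kinase":       r"[RK][RK].[ST]",
    "N-myristoylation":                 r"G[^EDRKHPFYW]..[STAGCN][^P]",
}

def scan_motifs(protein):
    """Print every (possibly overlapping) motif match with its 1-based start."""
    for name, pattern in MOTIFS.items():
        # Lookahead makes overlapping matches visible.
        for m in re.finditer(rf"(?=({pattern}))", protein):
            print(f"{name}: {m.group(1)} at residue {m.start() + 1}")

scan_motifs("MGTSLREESAAGTRRR")  # hypothetical fragment, illustration only
```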
Chromosome 20 Sequence-The Homo sapiens clone RP5-1121G12 from the RPCI5 library maps to chromosome 20q11.1-11.23 and has been assigned the EMBL/GenBank™ accession number AL109965. These data were produced by the Human Chromosome 20 Mapping and Sequencing Groups at the Sanger Center. Mapping and sequence data can be obtained on the World Wide Web.
RESULTS
The MZF1B SCAN Box Interacts with RAZ1-A human bone marrow cDNA library was screened by yeast two-hybrid analysis for potential MZF1B SCAN box interacting proteins, and one clone was identified (Table II). To confirm the positive protein interaction, we performed two-hybrid assays in the presence and absence of the MZF1B SCAN box domain. The library plasmid did not autonomously activate reporter gene expression, and a positive protein interaction was only observed when both the MZF1B SCAN box and library plasmid were co-transformed into yeast (Table II). To determine whether the interaction was an artifact of the fusion protein partner, we switched the AD and the DNA BD fusion partners for both the MZF1B SCAN box and the interacting clone. The interaction between the MZF1B SCAN box and the isolated library clone was not dependent upon the fusion protein partner (Table II). These control experiments confirm that we identified a cDNA library insert positive for MZF1B SCAN box protein interaction. We have named this protein RAZ1, a SCAN-related protein associated with MZF1B.
RAZ1 Is a Novel SCAN Box-Related Protein-The cDNA and amino acid sequences of RAZ1 are shown in Fig. 1A. The sequence is not found in the GenBank™ data base and appears to be a novel clone. The open reading frame for RAZ1 is defined by fusion with the upstream GAL4 activation domain and encodes 217 amino acids, starting with a glycine and ending with a stop codon at nucleotide position 652. The first methionine is at nucleotide 115 and contains a weak Kozak consensus sequence for translation initiation (37). Thus, it is probable that we have isolated a partial cDNA clone from the GAL4 fusion library that is incomplete at the 5′-end. The predicted sequence of RAZ1 encodes a SCAN-related domain at its carboxyl terminus (amino acids 140-200) but no zinc finger motifs. We refer to the domain as "SCAN-related" because the alignment with SCAN domains in other zinc finger proteins is conserved at the amino terminus and truncated at the carboxyl terminus (Table III). Approximately 20 SCAN box-containing proteins have been reported and/or deposited into the GenBank™ data base that contain zinc finger motifs and/or KRAB domains (Table III). Thus, the SCAN box appears to be frequently associated with zinc finger motifs and sometimes with KRAB domains. It is possible that RAZ1 is a member of a novel gene family of non-zinc finger SCAN proteins. Immediately following the SCAN-related domain is an arginine-rich region (amino acids 201-217). The RAZ1 open reading frame also contains putative sites for post-translational modification: two casein kinase II phosphorylation sites at amino acid positions Ser67 and Ser77, two protein kinase C phosphorylation sites at positions Thr101 and Thr144, one cAMP- and cGMP-dependent protein kinase phosphorylation site at position Thr211, and two N-myristoylation sites at positions Gly50 and Gly62 (Fig. 1A).
RAZ1 Maps to Chromosome 20-During the course of our studies, we identified a GenBank™-deposited human chromosome 20 sequence at 20q11.1-11.23 with identity to RAZ1 at nucleotides 1-59 and 60-775. This sequence also contains an additional 108-bp insert between nucleotides 59 and 60 of RAZ1. An illustration of the chromosome clone is shown in Fig. 1B. To further examine the RAZ1 gene, 5 × 10^5 clones from a K562 cDNA library were screened with a RAZ1 cDNA probe. Nine unique clones were isolated, and all are identical to RAZ1, extending from nucleotide 21 and through the poly(A) tail (data not shown). In addition, 3′-RACE analysis identified products identical to the 3′-end of RAZ1 that correspond to the contiguous genomic DNA sequence on chromosome 20 including the stop codon, polyadenylation signal, and poly(A) tail. We obtained six 5′-RACE products identical to RAZ1 at the 5′-end, of which one extends from nucleotide 17. Interestingly, one 5′-RACE product contains the 108-bp insert between nucleotides 59 and 60 of RAZ1 (data not shown). These data provide independent confirmation that two RAZ1 transcripts may exist with divergence at the 5′-end, one that does not contain an additional 108 bp of sequence and one that does.
RAZ1 mRNA Is Expressed in Various Cell Lines-Northern blot analysis detects a RAZ1 transcript of ~1 kilobase in total RNA isolated from both human hematopoietic and nonhematopoietic cell lines. The highest levels of RNA expression were detected in the cell lines HEL (erythroleukemia) and Caco-2 (colon adenocarcinoma) (Fig. 2). The blot was reprobed with glyceraldehyde-3-phosphate dehydrogenase (GAPDH) to verify equal loading of RNA (Fig. 2). Comparison of RAZ1 with the EST data base identified 100 ESTs between 300 and 600 bp in length that are identical to RAZ1. The ESTs were isolated from various human tissues including brain, breast, fetal heart, kidney, melanocyte, ovarian tumor, and placenta (data not shown). Six of these EST sequences contain the additional 108-bp insert between nucleotides 59 and 60 of RAZ1. Northern blots and reported ESTs suggest that RAZ1 may be widely expressed.
The MZF1B SCAN Box Domain Is Necessary for Interaction with RAZ1-To confirm that full-length MZF1B associates with RAZ1 and to identify MZF1B domains necessary and sufficient for interaction, MZF and RAZ1 proteins were co-expressed in vitro and co-immunoprecipitated with immunospecific antibodies.
As a first step, we demonstrated the immunospecificity of the nonspecific IgG, α-His, α-HA, α-ZF, and α-Myc antibodies by immunoprecipitating in vitro expressed MZF and RAZ1 proteins that contain either the amino-terminal HA or carboxyl-terminal mh epitope tag. The pcDNA expression plasmids used for immunoprecipitations are shown in Fig. 3. The antibodies were immunospecific, and no cross-reactivity was observed (Fig. 4A). Full-length MZF1B migrates as an 80-kDa protein, while the MZF1B NH2 terminus migrates at approximately 42 kDa (Fig. 5). MZF1B ΔSCAN and MZF1 migrate at approximately 72 and 50 kDa, respectively. RAZ1 migrates as a doublet of approximately 35 kDa. We consistently observe that the upper band of RAZ1 is more efficiently immunoprecipitated with α-HA. Therefore, the upper band of RAZ1 might be a result of translation initiation at the methionine upstream of the HA epitope tag, while the lower migrating band may be due to internal translation initiation at methionine 38 of RAZ1, thus producing a protein that lacks the HA epitope tag.
Protein association was demonstrated by co-expressing both MZF1B and RAZ1 epitope-tagged proteins in vitro and co-immunoprecipitating with the immunospecific antibodies. MZF1B is detected when the lysate is immunoprecipitated with α-HA, and RAZ1 is detected with α-His (Fig. 4B). This suggests that full-length MZF1B and RAZ1 are being pulled down in the same immunocomplex and are interacting in vitro. In addition, neither RAZ1 nor MZF1B ΔSCAN proteins were detected in the same immunocomplex, suggesting that the MZF1B SCAN box is necessary for heteroassociation with RAZ1 (Fig. 4C). Furthermore, MZF1, which lacks the SCAN box domain, does not interact with RAZ1, verifying that the association with RAZ1 is unique to the SCAN box-containing amino-terminal region of MZF1B (Fig. 4D).
[TABLE IV. The carboxyl terminus of RAZ1 is sufficient for MZF1B SCAN box interaction and RAZ1 self-association. Footnote a: Number of yeast colony-forming units/μg of plasmid DNA (cfu/μg) growing on leucine- and tryptophan-depleted medium: cfu × total suspension vol (μl) / (vol plated (μl) × dilution factor × amount DNA used (μg)) = cfu/μg of DNA. Footnote b: The percentage of yeast positive for both HIS3 and lacZ reporter gene expression.]
The Carboxyl Terminus of RAZ1 Is Sufficient for MZF1B SCAN Box Interaction and RAZ1 Self-association-To identify RAZ1 domains sufficient for MZF1B SCAN box association, we performed two-hybrid assays with the MZF1B SCAN bait plasmid and either the amino terminus or carboxyl terminus of RAZ1 that contains the SCAN-related domain. As a control, we demonstrated that the individual constructs did not autonomously activate reporter gene expression (Tables II and IV). Positive protein interactions occurred when the MZF1B SCAN box domain was co-transformed with full-length RAZ1 or the carboxyl terminus of RAZ1 but not with the amino terminus of RAZ1 (Tables II and IV). This demonstrates that the carboxyl terminus of RAZ1 is sufficient for MZF1B SCAN box interaction, suggesting that the SCAN box domains from both proteins are mediating heteroprotein association.
To test whether RAZ1 self-associates via a SCAN-dependent mechanism, we performed two-hybrid assays using RAZ1 fused to the GAL4 DNA BD and RAZ1 fused to the GAL4 AD. The individual plasmids did not autonomously activate reporter gene expression (Tables II and IV). Co-transformation of both RAZ1 fusion proteins resulted in colonies positive for protein interaction, suggesting that RAZ1 self-associates in vitro (Table IV). While amino acids 1-37 do not appear to be necessary for RAZ1 self-association, the carboxyl terminus of RAZ1 is necessary for self-association (Table IV). This demonstrates that the SCAN-related domain of RAZ1 mediates homo- as well as heteroprotein association.
The MZF1B SCAN Box Is Necessary for Self-association-In demonstrating that the SCAN box mediates heteroassociation between MZF1B and RAZ1 proteins as well as RAZ1 homoassociation, we reasoned that the SCAN box might also mediate MZF1B homoassociation. To first determine if MZF1B could self-associate, we performed co-immunoprecipitation assays with epitope-tagged MZF1B and nontagged MZF1B.
[FIG. 5. The MZF1B SCAN box is necessary for MZF1B self-association. A, full-length MZF1B self-associates. Lanes 1-8, controls for antibody specificity; lanes 9-12, co-expression and immunoprecipitation of tagged and nontagged MZF1B. The IVT reactions for A and B were scaled down to a final volume of 10 μl, and 4.5 μl of the lysate was used for immunoprecipitations in a final volume of 250 μl. B, the MZF1B NH2 terminus is sufficient for self-association. Lanes 1-6, controls for antibody specificity; lanes 7-9, co-expression and immunoprecipitation of tagged and nontagged MZF1B NH2 terminus. C, the MZF1B SCAN box is necessary for self-association. Lanes 1-4, controls for antibody specificity; lanes 5-8, co-expression and immunoprecipitation of nontagged MZF1B NH2 terminus and tagged MZF1B NH2 terminus ΔSCAN. The dotted arrows indicate higher order protein complexes.]
The Myc/His epitope tag adds ~2 kDa, and the two proteins are distinguishable by size as well as immunoreactivity with the epitope tag-specific antibodies, α-Myc or α-His. The MZF1B proteins were co-expressed in vitro and co-immunoprecipitated with control IgG, α-post-SCAN, or α-Myc. In lysates expressing both forms of MZF1B, we detected nontagged MZF1B when epitope-tagged MZF1B was immunoprecipitated with α-Myc, suggesting that MZF1B self-associates in vitro (Fig. 5A). To test for the possibility of nonspecific binding, we repeated the assays by co-expressing both MZF1B and epitope-tagged β-galactosidase. MZF1B did not associate with β-galactosidase, supporting our observation that MZF1B self-association is not an artifact of our co-immunoprecipitation conditions 4 (data not shown). In addition, the amino terminus of MZF1B is sufficient for self-association, since both epitope-tagged and nontagged MZF1B NH2 terminus proteins were detected in the same immunocomplex (Fig. 5B). Finally, the MZF1B SCAN box is necessary for MZF1B self-association because nontagged MZF1B NH2 terminus did not co-immunoprecipitate with MZF1B NH2 terminus ΔSCAN (Fig. 5C). It should be noted that we consistently observe higher molecular mass bands of >200 and ~80 kDa in immunoprecipitated lysates expressing MZF1B and MZF1B NH2 terminus, respectively (Fig. 5, A and B). While the identification of these bands has not been confirmed, they may represent higher order complexes of the 80-kDa MZF1B and 42-kDa MZF1B amino terminus.
DISCUSSION
RAZ1 Protein Structural Motifs-We have described the identification of RAZ1, a novel human cDNA clone isolated from a yeast two-hybrid screen based on interaction with the MZF1B SCAN box. The function of RAZ1 is unknown, but the predicted sequence contains conserved motifs that provide insight into RAZ1's potential role in regulating transcription factor function. RAZ1 cDNA contains an open reading frame of 217 amino acids with a carboxyl-terminal region homologous to the SCAN box domain conserved in zinc finger proteins. Interestingly, the RAZ1 SCAN box is truncated and lacks the predicted third α-helix present in other SCAN box proteins. Thus, we have designated this as a SCAN-related domain. In contrast to other SCAN box proteins, RAZ1 does not appear to encode zinc finger motifs based on the sequence that we have obtained.
The sequences for approximately 20 SCAN box-containing proteins have been reported and/or deposited into the GenBank™ data base (Table III). Of these, seven also encode a KRAB A and/or B domain, and 19 contain carboxyl-terminal zinc finger motifs. This suggests that the SCAN box is frequently associated with zinc finger motifs and sometimes with KRAB domains. The remaining SCAN box proteins that do not contain zinc finger motifs include p18, TRFA, PGC-2, and RAZ1. p18 and TRFA are partial clones that do not contain sufficient sequence to determine the presence or absence of zinc finger motifs. PGC-2 (peroxisome proliferator-activated receptor γ coactivator-2) is a murine adipogenic cofactor bound by the differentiation domain of the peroxisome proliferator-activated receptor γ (38). PGC-2 encodes a 142-amino acid protein with a carboxyl-terminal SCAN-related domain but no zinc finger motifs. The PGC-2 protein shares 76% identity to RAZ1 (49% at the NH2 terminus; 97% at the COOH terminus), suggesting that PGC-2 may be the murine homologue of RAZ1.
It is possible that RAZ1, PGC-2, and potentially p18 and TRFA represent a novel gene family that contains SCAN box domains but lacks zinc fingers. Similarly, the SSX gene family contains KRAB domains without zinc fingers. In addition to the KRAB domain, the SSX gene family encodes a novel transcription repression domain at the carboxyl terminus, SSXRD. Interestingly, this SSXRD domain exerts stronger repression than the KOX1 KRAB domain, and the KRAB-related domain fails to interact with the co-repressor TIF1β (KAP-1) (50-52). Therefore, the protein binding and repression function of the SSX genes that contain KRAB domains and lack zinc fingers appears to be different and distinct from KRAB proteins that contain zinc fingers. Thus, SCAN proteins that lack zinc fingers may contain other conserved domains that modify or define their function. RAZ1 contains an arginine-rich region of 16 amino acids at the carboxyl terminus, immediately following the SCAN-related domain. Short (10-20-amino acid) arginine-rich sequences have been shown to mediate DNA and RNA binding as well as nuclear localization. The arginine-rich domain found in the amino terminus of the recombination activating gene, RAG-1, exhibits DNA binding activity (53). In addition, the arginine-rich motifs of the HIV Rev and Tat proteins and the bacteriophage λ N, φ21 N, and P22 N proteins mediate RNA binding (reviewed in Ref. 54). Specifically, the human immunodeficiency virus (HIV) Rev binds to the Rev response element of HIV-1 as an α-helix and facilitates the nuclear export of unspliced HIV pre-mRNAs (reviewed in Ref. 54). There is an increasing amount of evidence that the arginine-rich domains present in HIV Rev, Tat, and the human retrovirus T-cell leukemia virus type 1 also function as direct importin β-dependent nuclear localization signals (55,56). Thus, the arginine-rich domain of RAZ1 may mediate DNA-RNA binding and/or function as a nuclear localization signal. Localization to the nucleus would place RAZ1 in the same cellular environment as zinc finger transcription factors, and nucleotide binding activity may allow RAZ1 to compete with other zinc finger SCAN proteins for DNA-RNA binding sites. In addition, several putative phosphorylation and N-myristoylation sites reside within RAZ1, suggesting that the function of RAZ1 may be regulated by posttranslational modifications.
RAZ1 Gene Structure and mRNA Expression-The isolated RAZ1 cDNA clone is 775 bp in length and contains a putative translation initiation start site, stop codon, and polyadenylation signal (5′-AAU GAA AAA-3′). Several ESTs, cDNA library clones, and RACE products share identity to RAZ1, suggesting that we have identified a bona fide transcript. In addition, some of the EST and 5′-RACE sequences contain an additional 108-bp insert between nucleotides 59 and 60 of RAZ1. It remains to be determined if both transcripts are expressed in vivo. Northern blot analysis detects an ~1-kilobase RAZ1 transcript in both hematopoietic and nonhematopoietic cells, and ESTs from various tissues share identity to RAZ1, suggesting that the RAZ1 gene is expressed in a variety of tissues. Based on RACE, cDNA library clones, ESTs, and chromosome 20 sequence, it is likely that we have obtained the complete 3′-end of the RAZ1 transcript and are within a few hundred nucleotides of obtaining the entire 5′-end. We scanned the chromosome 20 sequence and found that the open reading frame upstream of RAZ1 continues for 128 amino acids and contains a methionine with a weak Kozak consensus (Fig. 1B). However, further analysis is needed to confirm the complete 5′-end of the transcript.
Interestingly, the RAZ1 gene is localized to chromosome 20q11.1-11.23. Deletion of the long arm of chromosome 20, most often 20q11.2-13, is associated with myeloid disorders, particularly myeloproliferative disorders, myelodysplastic syndrome, acute lymphocytic leukemia, and acute myelogenous leukemia (57-59). This suggests that the genetic loss on chromosome 20q may provide a proliferative advantage to myeloid cells, possibly through the loss of a tumor suppressor gene. In addition, an increased copy number of DNA sequences from chromosome 20q has been observed in pancreatic cancers (60) and breast carcinomas (61), and trisomy of chromosome 20 is associated with the progression of papillary renal cell carcinomas (62). This indicates that a gain of chromosome 20q may facilitate uncontrolled cellular proliferation, possibly through the aberrant expression of an oncogene.
The RAZ1 chromosome 20q11.1-11.23 location raises the question as to whether loss or gain of RAZ1 contributes to any disorders associated with the locus. The myeloid proliferative disorders associated with loss of chromosome 20q are particularly interesting because MZF1/1B appears to be an important regulator of hematopoietic differentiation and proliferation (30-33). Therefore, RAZ1 or MZF1B may be a tumor suppressor gene, and the interaction between RAZ1 and MZF1B may be necessary to elicit a tumor suppressor function. Thus, a genetic loss of RAZ1 might block tumor suppressor activity, thereby providing a proliferative advantage to hematopoietic cells.
The SCAN Box Function-Co-immunoprecipitation and yeast two-hybrid analyses demonstrate that MZF1B and RAZ1 associate in vitro via a SCAN box-dependent mechanism. In addition, the SCAN box domains are necessary for MZF1B and RAZ1 self-association. Therefore, we have demonstrated that the SCAN box is a protein interaction domain that mediates both hetero- and homoprotein associations. These findings suggest a novel cascade mediated by unique SCAN box protein complexes. To define cellular cascades regulated by SCAN box protein interactions, it will be necessary to identify in vivo SCAN box oligomers and the unique functions elicited by each complex. The identification of mechanisms regulated by SCAN box protein complexes will significantly impact our understanding of the transcriptional role of SCAN box zinc finger proteins and their associated factors.
The transcriptional activity of the SCAN box-containing zinc finger proteins MZF-2 and ZNF174 has been examined. Full-length murine MZF-2 does not activate reporter gene expression, but a truncated form of MZF-2 markedly enhances transcription (29). The transcriptionally active form of MZF-2 contains the SCAN box domain, but the SCAN box is not necessary for transcriptional activity. ZNF174 is a transcriptional repressor of reporter genes driven by the human tumor growth factor-β and platelet-derived growth factor-B promoters (21). The complete amino terminus of ZNF174, including the SCAN box domain, transcriptionally represses reporter gene expression when fused to a heterologous DNA binding domain, but the SCAN box domain is not sufficient for transcriptional repression. The ZNF174 repression domain is probably present within the remaining amino-terminal portion, and the SCAN box may or may not modify this function. Thus, the SCAN box does not appear to function as a transactivation or repression domain. These conclusions are supported by our personal observations 5 and reports by Pengue et al. (4) and Williams et al. (21), which show that the SCAN box does not confer transactivation or repression function onto a heterologous DNA binding domain. While the SCAN box is not an independent transactivation or repression domain, the SCAN box may function to recruit co-repressors and transactivators necessary for transcriptional regulation.
MZF1B Function-We identified RAZ1 as a potential in vivo protein interaction partner with the SCAN box-containing zinc finger protein MZF1B, the human isoform of MZF1 previously identified by Peterson and Morris 1,2 . MZF1 was initially identified as a zinc finger transcription factor necessary for granulocytic differentiation and critical to the regulation of cell proliferation and apoptosis (30-33). In contrast to previous reports, both MZF1B and MZF1 mRNA transcripts are expressed in numerous tissues. 2 Therefore, previous reports addressing MZF1 function may have been indirect measurements of MZF1B function. Thus, MZF1B may function as an important regulator of granulocytic differentiation, cell proliferation, and apoptosis. The interaction between MZF1B and RAZ1 might be necessary for mediating MZF1B function, or RAZ1 may modify intrinsic MZF1B function. It is also possible that other SCAN box proteins compete with MZF1B for binding to the same protein, thereby providing a transcriptional regulatory mechanism based on the sequestration of specific factors and availability of protein partners. Furthermore, MZF1B and RAZ1 each self-associate in vitro, suggesting that each protein may participate in the formation of unique complexes with distinct functions. Thus, the transcriptional activity of MZF1B is probably mediated by specific protein-protein interactions with RAZ1, MZF1B, and other SCAN box proteins. Identifying the in vivo MZF1B protein partners and their effect on MZF1B activity will provide insight into the possible mechanisms by which MZF1B functions to transcriptionally regulate cell development. | 2018-04-03T01:26:00.240Z | 2000-04-28T00:00:00.000 | {
"year": 2000,
"sha1": "d41cc9852b18dcab24817280bb3dbf5b3540a4ca",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/275/17/12857.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "e8b43112fbbd9c958d04aa66ed2e265592755af0",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
260910975 | pes2o/s2orc | v3-fos-license | A Multi-Spectral Image Database for In-Vivo Hand Perfusion Evaluation
The increasing prevalence of vascular diseases encourages the development of minimally invasive approaches to assess tissue perfusion. A significant challenge facing current state-of-the-art methods is their validation against clinical data. In this study, we introduce an open-source database designed to evaluate tissue perfusion during the application of an occlusion protocol. The database comprises sequences of multi-spectral images (visible and near-infrared region) from the subjects' predominant hand and their photoplethysmography data for validation. Our study recruited 45 healthy participants, including 21 females, with an age range between 18–24 years old (standard deviation equal to 1.73). The database was evaluated using two methods for estimating skin perfusion parameters based on multi-spectral images: a Kubelka-Munk model and a linear regression. Meanwhile, for validation purposes, the changes in oxygenated and deoxygenated hemoglobin were evaluated by photoplethysmography data as baseline perfusion parameters. The Pearson correlation between plethysmography-based perfusion parameters and those extracted from multi-spectral images was evaluated in all cases as a validation metric. Our findings demonstrated a strong Pearson correlation ($\rho > 0.7$) between changes in oxygenated and deoxygenated hemoglobin and multi-spectral based perfusion parameters, suggesting that the database is useful for further research related to in-vivo perfusion assessment. The primary objective of this database is to provide open-source data from a controlled occlusion protocol to evaluate new approaches based on multi-spectral images in the visible and near-infrared regions. In addition, the validation by photoplethysmography data facilitates the development and assessment of innovative tissue perfusion estimation techniques.
I. INTRODUCTION
The human body runs on oxygen, nutrients and immune factors, which are transported by the circulatory and lymphatic systems, in a process known as tissular perfusion.
Poor blood perfusion may cause problems such as ischemia, and additional complications may lead to organ damage or even failure. Impaired blood flow also affects wound healing [1], which can lead to infections in open wounds. In fact, this scenario is critical for diabetic patients, whose blood vessels in the lower extremities are usually affected by their condition [2].
At the clinical level, physicians pay attention to variables that reflect the general state of perfusion throughout the body. It is common practice to evaluate temperature, skin color, and even perform some simple tests to assess capillary refill time by applying pressure to a fingernail [3]. Modern methods, some of which require contrast agents, include thermography, laser speckle contrast analysis [4], and indocyanine green fluorescent imaging [5]. In some cases, techniques such as magnetic resonance imaging [6], computer tomography [7], laser doppler imaging [8], multispectral optoacoustic tomography [9] and surface electrode approaches [10] can provide information about regional perfusion. There are perfusion tests available for specific organs, such as myocardial perfusion imaging, cerebral oximetry, renal scintigraphy, and hepatic vein characterization, just to name a few [11]. Nonetheless, there is a growing interest in the development of minimally invasive methods for perfusion estimation in large tissue regions. Such methods aim to characterize tissular perfusion at multiple positions up to the microcirculatory level [12]. Non-invasive imaging tests are generally safe, painless, and require minimal preparation. They serve as an initial screening tool and offer valuable information about organ function and health. However, they may have limitations in terms of resolution and sensitivity compared to invasive tests.
In this context, a common approach is to use imaging techniques, either reflectance or absorbance of light, from different portions of the spectra to measure a perfusion parameter. For instance, PulseCam [13] estimates maps of the pulsatile component (AC) of blood flow in the skin by using an RGB camera and reference photoplethysmography (PPG) values. The authors in [13] test their method by applying occlusion on healthy participants (vascular occlusion at 70 mmHg, and total occlusion at 140 mmHg) using a blood cuff on the arm. They evaluated a total of 12 participants with various skin tones (Fitzpatrick skin types I to V [14]).
Another approach using visible spectra is detailed in [15]. The proposed method estimates a video output corresponding to variations in finger blood perfusion on non-Euro-Americans subjects. Authors validate their approach by identifying ischemia in 10 volunteers, who underwent an occlusion test. This test lasted 10 min, 3 min without pressure, and 7 more with a tourniquet-induced occlusion.
Spectral imaging techniques capture radiation across multiple wavelengths, they are not restricted to the visible spectrum [16]. This includes infrared, multispectral, and hyperspectral modalities. These techniques can reveal properties not discernible with standard imaging approaches. As such, spectral imaging can provide insight into the biochemical composition of samples in a non-invasive manner. A primary challenge lies in extracting meaningful information from the large volumes of data generated through spectral imaging [17], [18]. Originally developed for remote sensing purposes [19], advancements in technology have enabled the proliferation of spectral imaging into diverse fields. These include precision agriculture [20], food quality evaluation [21], and medical applications [22], among others. Perfusion imaging represents one novel and promising application [23]. By noninvasively measuring physiological information, perfusion imaging may allow for the evaluation of organ function and disease monitoring.
The proposal in [24] employs multi-spectral imaging (MSI) and compares their results against tissue oxygen saturation (StO2) from near-infrared reflectance spectroscopy (NIRS). The authors measure local tissue desaturation and reperfusion during two consecutive vascular occlusion tests. However, no detailed information exists on the methodology for estimating the MSI perfusion parameters. A total of 58 volunteers participated in this study, and the subject's systolic pressure was used as a control parameter. The authors induced a total occlusion by applying 30 mmHg above systolic pressure, and then the cuff was released until oxygen saturation was below 40%, according to NIRS measurements. Pearson's correlation was used to evaluate the level of agreement between the MSI and NIRS perfusion parameters. According to their results, the correlation was moderate (r = 0.42).
Other approaches are based on the combination of different systems, such is the case of [25]. In this study, the authors propose a laser speckle incorporated multispectral system to estimate StO2 and a relative blood perfusion parameter. The multispectral channels in the green portion of the spectrum (530-570 nm) were utilized, and a model based on the Extended Beer Lambert Model was fitted using light attenuation. The authors assessed the application of this approach for monitoring the healing progression of skin grafting in patients with diabetic ulcers. Over a span of two years, approximately four sessions were conducted. The results were validated by comparing the outcomes between individuals with type II diabetes and foot ulcers, and a healthy control group. The participants were divided into two groups: one with positive healing and the other with impaired healing. However, the findings related to StO2 revealed only a small mean absolute difference in comparison to the control group.
Hyper-spectral (HS) imaging is another technique that can measure reflectance data in the visible and near-infrared (VIS-NIR) range but with more wavelength bands available. These systems are also non-destructive, representing a great option for biomedical applications. Such is the case of in-vivo tumor boundary delimitation [26]. The authors built a database of HS images during neurosurgery procedures. They collected 36 HS images from 22 participants. The images were labeled by neurosurgeons to identify four classes of tissue: normal, primary, and secondary cancer, and a fourth class containing blood-vessels and background elements. In [27], the authors employ a commercially available system for clinical use that can record images with a spatial resolution of 640 × 480, and 100 spectral channels with a processing time of around 30 seconds. However, many of these approaches rely on prior information such as absorption and scattering coefficients [28]. In addition, they do not actually use all the available spectral information. The methods proposed in [29] can estimate perfusion parameters in a clinical setting such as StO2, hemoglobin, and water indexes. In this research line, the work in [30] established a key contribution, where the authors applied two different models and their inverses to obtain perfusion parameters. They used Markov-chain [31] and Kubelka-Munk models to estimate skin parameters [32]. Hence, a hyper-spectral input image was used to estimate concentrations of melanin, blood volume, and blood-oxygen fractions along with the depth of the skin layers.
The main problem in estimating perfusion parameters through imaging methods is the lack of proper validation. In-vivo validation is challenging due to ethical considerations, variability between populations and sample sizes, as well as limited control of experimental conditions. Several diagnostic tests, although they may be specific, are highly invasive, such as arterial blood gas analysis [33]. This exam requires a blood sample, from which oxygen levels, pH, and other information about tissue perfusion can be extracted. These tests are the gold standard for the calibration of oximetry devices [34]. To evaluate perfusion, another option is the application of occlusion tests to induce ischemia. These tests consist of the temporary restriction of blood flow to an area of interest. When applied to a limb, a simple tool such as a blood cuff, a tourniquet, or even a rubber band can be used for blood vessel blockage. When the pressure applied only restricts the blood flow in the veins, it is called venous occlusion. A total occlusion occurs when the blood circulation stops completely. When the blockage is released, the restoration of blood flow is called reperfusion. This condition can lead to hyperemia, which is a rapid and exaggerated reperfusion of the organs affected by ischemia [35]. The changes in oxygen levels might be useful to validate reperfusion parameters. However, perfusion parameters based on pulse oximetry might not register accurate readings during blood occlusion [36]. In contrast, it is possible to correctly measure these abrupt oxygen changes using NIRS or PPG readings [37].
Thus, our work introduces an open-source database for evaluating perfusion parameters in an upper limb. The database comprises sequences of multi-spectral images of the hand palm and PPG data from the thumb. The latter is used for validation purposes through the estimation of baseline perfusion parameters (changes in oxygenated and deoxygenated hemoglobin). The data is recorded in-vivo during the application of an occlusion protocol, inducing changes in the dominant hand palm for approximately 10 min. for each subject. In addition, we have conducted an initial evaluation of the data by using two well-known regression techniques: the Kubelka-Munk method and a linear model for monitoring skin perfusion parameters. These methods provide valuable insight into the data and can be used to validate perfusion MSI-based techniques.
The rest of the manuscript is organized as follows. A description of the experimental protocol and hardware, as well as the details of the processing algorithms and the validation stage, are provided in Section II. The characteristics of the database, and the results obtained to estimate hemoglobin changes during the induced hyperemia, are described in Section III. To conclude, the results are discussed, and the final remarks are presented in Section IV.
II. METHODOLOGY
In this section, we describe the experimental protocol used to generate the database, as well as the hardware employed for PPG and MSI data acquisition. We present the methodology to estimate different perfusion parameters with both acquisition approaches. Hence, the proposed methodology is summarized in Fig. 1.
A. EXPERIMENTAL PROTOCOL
This work aims to generate an open-source database to evaluate changes in tissue perfusion. To do so, we reproduced the protocol from [36], to induce key changes in blood oxygen levels. This protocol uses a blood pressure cuff to partially occlude the blood flow in an upper limb. This protocol is considered safe and was performed in line with the principles of the Declaration of Helsinki. We followed the guidelines of our institution for experiments involving human subjects and submitted the protocol to be reviewed by the Ethics Committee "Comité Institucional de Bioética" of the Universidad Autonoma de Aguascalientes in Mexico. The protocol was approved with the code: CIB-UAA-37.
Furthermore, the protocol was explained to each participant, and they were required to sign an informed consent to be included in this study. Participants were excluded according to the following criteria: any individual with a history of vascular disease or chronic conditions such as diabetes mellitus and hypertension was not eligible to participate. In addition, those with skin infections or abnormalities were also excluded to maintain the integrity of the study and to ensure an accurate assessment of tissue perfusion.
Before starting the protocol, the participants were given a minimum of 10 minutes to rest in a room set to ambient temperature. All testing was conducted between 9:00 and 15:00 hours, ensuring consistency. Each measurement stage was carried out in a unified laboratory setting, deliberately isolated from sunlight. The participants were seated with their upper limbs extended on a table. The blood pressure cuff, as well as all sensors, were placed on the dominant limb, and the hand was recorded by the multi-spectral camera. A pulse oximeter was placed on the index finger, while the PPG device was placed on the thumb. These sensors and their cables were covered with black pasteboard to avoid reflections on the camera. First, the participant's systolic and diastolic pressures were recorded. Each experiment lasted 10 min., and the protocol was divided into five stages of two min. each. During the first stage, data acquisition begins and no pressure is applied through the sphygmomanometer. The vascular occlusion (VO) stage starts at 2:00 min., where a fixed and constant 60 mmHg pressure is applied manually using the blood pressure cuff. At the beginning of the rest stage (4:00 min. mark), the pressure is released, and no pressure is applied. The total occlusion (TO) stage starts at 6:00 min., where pressure is constantly applied for the whole two min. This pressure is set to 20 mmHg above the registered systolic pressure for the participant. At the 8-min. mark, the pressure is released and the hyperemia stage begins, where the subject is allowed to rest. The experiment ends at ten min.
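For scripted analyses of the recordings, the five stages can be recovered from the elapsed time alone. The Python sketch below encodes the schedule described above; the stage labels are our own shorthand, not identifiers from the database.

```python
def protocol_stage(t_seconds):
    """Map elapsed time (s) to the stage of the 10-min occlusion protocol.
    Stage labels are our shorthand for the description in the text."""
    stages = ["baseline", "vascular occlusion (60 mmHg)", "rest",
              "total occlusion (systolic + 20 mmHg)", "hyperemia"]
    if not 0 <= t_seconds < 600:
        raise ValueError("the protocol lasts exactly 10 minutes")
    return stages[int(t_seconds // 120)]  # five stages of 120 s each
```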
B. PHOTOPLETHYSMOGRAPHY DATA
Photoplethysmography data are used to obtain baseline perfusion parameters. For this goal, the MAX30102 PPG sensor is used [38]. This device is a transmittance PPG sensor that emits light using red (660 nm) and infrared (880 nm) LEDs. The light is sampled with a photodetector with a spectral range of sensitivity between 600 and 900 nm. The PPG sensor was controlled using an Arduino Mega microcontroller through the I2C interface. The sampling rate was fixed to 80 Hz. The DC and AC components were obtained according to the methodology in [39]. First, the supply voltage interference is filtered from both the red and infrared PPG channels. Then, the signal peaks are localized to identify each cycle in the PPG measurements. The DC component, i.e., the pulse baseline, is extracted with a low-pass filter. Once the DC component is extracted from each PPG signal, the AC component is calculated as the difference between the maximum and minimum values in a single PPG cycle. The AC and DC components of each PPG signal are the basis for estimating multiple perfusion parameters [40].
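A minimal sketch of this AC/DC decomposition is given below. It assumes NumPy/SciPy; the filter order, cutoff frequency, and minimum peak spacing are illustrative assumptions rather than the values used in the study, and the powerline-interference filter mentioned above is omitted.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

FS = 80.0  # PPG sampling rate (Hz)

def ac_dc_components(ppg, fs=FS):
    """Split one PPG channel into a DC baseline and per-cycle AC amplitudes."""
    # DC baseline: low-pass well below the cardiac band (cutoff assumed).
    b, a = butter(2, 0.5 / (fs / 2), btype="low")
    dc = filtfilt(b, a, ppg)
    # Pulsatile part: remove the baseline, then locate the systolic peaks
    # (minimum spacing of 0.4 s, i.e., at most 150 bpm, is an assumption).
    pulsatile = ppg - dc
    peaks, _ = find_peaks(pulsatile, distance=int(0.4 * fs))
    # AC: max-min excursion within each cardiac cycle (between peaks).
    ac = np.array([pulsatile[s:e].max() - pulsatile[s:e].min()
                   for s, e in zip(peaks[:-1], peaks[1:])])
    return dc, ac, peaks
```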
We estimated the ratio of absorbances, defined as

$$R = \frac{AC_{Red}/DC_{Red}}{AC_{IR}/DC_{IR}}, \quad (1)$$

where the sub-index Red represents the PPG component recorded at 660 nm, and IR represents the measurement at 880 nm. There are several models in the literature to estimate peripheral capillary oxygen saturation (SpO2); in this work, we used the following definition from [39]:

$$SpO_2 = A - B \cdot R, \quad (2)$$

where A and B are the empirical calibration constants given in [39]. The perfusion index measures the relationship between the AC and DC components [41]. In this work, we employed a definition based only on the IR measurement [40], according to the formula

$$PI = \frac{AC_{IR}}{DC_{IR}} \times 100\%. \quad (3)$$

Given the DC signal for each PPG channel, the light attenuation was calculated as

$$\Delta A_{\lambda} = \ln\left( \frac{DC_{\lambda}^{(0)}}{DC_{\lambda}} \right), \quad (4)$$

where the index (0) represents the initial measurements during the protocol. In this work, we employed the average of the first 100 samples of each DC signal in every experiment. Next, we employ the solution proposed by [36] to estimate baseline perfusion parameters through the changes in oxygenated hemoglobin [HbO2] and deoxyhemoglobin [Hb]. Under the modified Beer-Lambert law, the attenuation change at each wavelength is

$$\Delta A_{\lambda} = \left( \varepsilon_{HbO_2}^{\lambda}\, \Delta[HbO_2] + \varepsilon_{Hb}^{\lambda}\, \Delta[Hb] \right) d \cdot DPF, \quad (5)$$

and solving the resulting two-wavelength system yields

$$\Delta[HbO_2] = \frac{\varepsilon_{Hb}^{IR}\, \Delta A_{Red} - \varepsilon_{Hb}^{Red}\, \Delta A_{IR}}{d \cdot DPF \left( \varepsilon_{HbO_2}^{Red}\, \varepsilon_{Hb}^{IR} - \varepsilon_{HbO_2}^{IR}\, \varepsilon_{Hb}^{Red} \right)}, \quad (6)$$

$$\Delta[Hb] = \frac{\varepsilon_{HbO_2}^{Red}\, \Delta A_{IR} - \varepsilon_{HbO_2}^{IR}\, \Delta A_{Red}}{d \cdot DPF \left( \varepsilon_{HbO_2}^{Red}\, \varepsilon_{Hb}^{IR} - \varepsilon_{HbO_2}^{IR}\, \varepsilon_{Hb}^{Red} \right)}, \quad (7)$$

where the molar extinction coefficients for each molecule and wavelength ($\varepsilon_{Hb}^{Red}$, $\varepsilon_{HbO_2}^{Red}$, $\varepsilon_{Hb}^{IR}$, $\varepsilon_{HbO_2}^{IR}$) were taken from [42], tabulated at 660 and 880 nm. The parameter d in eqs. (6) and (7) represents the distance between the light emitter and the detector, while DPF is the differential path factor. These data are not available for the MAX30102 sensor. Therefore, to estimate changes proportional to [absolute concentration] × [optical pathlength], we followed the methodology by Abay et al. [36], [37].
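Assuming the AC and DC components have been aligned on a common time base (e.g., per-cycle values held constant over each cycle), eqs. (1), (3), (4), (6), and (7), as reconstructed above, can be evaluated as in the sketch below. The extinction-coefficient table and the d·DPF product are left as inputs, and eq. (2) is omitted because its calibration constants come from [39].

```python
import numpy as np

def perfusion_parameters(ac_red, dc_red, ac_ir, dc_ir, eps, d_dpf=1.0):
    """Evaluate eqs. (1), (3), (4), (6) and (7) from aligned AC/DC arrays.

    `eps[(molecule, channel)]` holds the molar extinction coefficients,
    e.g. eps[("Hb", "Red")]. The product d*DPF is unknown for the
    MAX30102, so the default d_dpf=1 returns changes proportional to
    [concentration] x [optical pathlength], as described in the text."""
    r = (ac_red / dc_red) / (ac_ir / dc_ir)                    # eq. (1)
    pi = 100.0 * ac_ir / dc_ir                                 # eq. (3)
    # Baseline DC: average of the first 100 samples of each channel.
    dA_red = np.log(dc_red[:100].mean() / dc_red)              # eq. (4)
    dA_ir = np.log(dc_ir[:100].mean() / dc_ir)
    det = (eps[("HbO2", "Red")] * eps[("Hb", "IR")]
           - eps[("HbO2", "IR")] * eps[("Hb", "Red")])
    d_hbo2 = (eps[("Hb", "IR")] * dA_red
              - eps[("Hb", "Red")] * dA_ir) / (d_dpf * det)    # eq. (6)
    d_hb = (eps[("HbO2", "Red")] * dA_ir
            - eps[("HbO2", "IR")] * dA_red) / (d_dpf * det)    # eq. (7)
    return r, pi, d_hbo2, d_hb
```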
C. MULTI-SPECTRAL IMAGING DATA
The full-width half-maximum values of the camera channels are 26, 24, 25, 25, 27, 28, 31, and 34 nm. The 9th channel records the average response of the other 8 channels; however, this information was not employed in our study. The scene was illuminated with a 150 W halogen light (Fiber-Lite Mi-150 Illuminator Series, Dolan-Jenner Industries, Boxborough, MA, USA). The camera was equipped with a polarizer (PS1000 VIS/SWIR Wire Grid Linear Polarizer Film), and the raw spatial resolution of each spectral image is 339 × 426 pixels. The camera was set to record multi-spectral images at a rate of 4.10 Hz with an exposure time of 16.70 ms.
At the processing stage, the images were cropped to 320 × 400 pixels. A mask was calculated so that only pixel positions corresponding to the subject's hand are processed. To do so, we calculated the Euclidean norm of each image along the spectral dimension and masked out every pixel whose value was lower than 25% of the maximum. Furthermore, pixels in the boundary regions of the limb were removed by applying a morphological erosion with a disk kernel of radius three. The set of all available pixels in the mask of a multi-spectral image is denoted as P ⊂ Z × Z.
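The masking step can be illustrated with a short sketch; the scikit-image morphology routines are used here as one possible implementation of the erosion described above.

```python
import numpy as np
from skimage.morphology import erosion, disk

def hand_mask(cube, rel_threshold=0.25, radius=3):
    """Mask of hand pixels in a (H, W, channels) multi-spectral image."""
    # Euclidean norm along the spectral dimension, per pixel.
    energy = np.linalg.norm(cube, axis=-1)
    # Keep pixels at or above 25% of the maximum spectral energy.
    mask = energy >= rel_threshold * energy.max()
    # Erode the limb boundary with a disk kernel of radius three.
    return erosion(mask, disk(radius))
```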
A multi-spectral image at pixel position p and wavelength channel λ is denoted as I(p, λ), where p ∈ P and λ ∈ Λ. The reflectance at channel λ and pixel p is obtained by a normalization step:

R(p, λ) = (I(p, λ) − I_D(p, λ)) / (I_W(p, λ) − I_D(p, λ)), (9)

where I_W(p, λ) and I_D(p, λ) denote the corresponding white and dark reference images. In this work, we employed a polytetrafluoroethylene (PTFE) plate to generate the white reference I_W(p, λ) [43]. The dark reference I_D(p, λ) was captured by taking images with the lens cap on. Some methods to estimate perfusion parameters are based on the absorbance, which is defined as:

A(p, λ) = −log10 R(p, λ). (10)
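Eqs. (9) and (10) translate directly into code; the small epsilon guard against division by zero and the logarithm of zero is an addition made here.

```python
import numpy as np

def reflectance(I, I_white, I_dark, eps=1e-6):
    """Eq. (9): flat-field normalization with white (PTFE) and dark references."""
    return (I - I_dark) / np.maximum(I_white - I_dark, eps)

def absorbance(R, eps=1e-6):
    """Eq. (10): absorbance from reflectance."""
    return -np.log10(np.maximum(R, eps))
```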
D. ESTIMATION OF MSI PERFUSION PARAMETERS
To demonstrate the value of the presented database, we evaluated two methods for estimating perfusion parameters based on regression techniques and multi-spectral images. These methods use reference spectral responses, i.e., tabulated spectral absorption coefficients measured under laboratory conditions or their approximations [28], [44]. We analyzed a linear model based on absorbance [45], [46], [47], [48] and a non-linear model [30], [32], [49], [50] based on reflectance images. For this evaluation, we quantified perfusion parameters related to the contribution of hemoglobin in its oxygenated HbO2 and deoxygenated Hb forms using the MSI data. The results were contrasted against the baseline PPG perfusion parameters. The spectral absorption coefficients at wavelength channel λ were approximated [50] as

µ_a.HbO2(λ) = ln(10) · e_HbO2(λ) · G / M, (11)
µ_a.Hb(λ) = ln(10) · e_Hb(λ) · G / M, (12)

where G represents the weight in grams per liter and M is the gram molecular weight of hemoglobin. In these experiments, we set G = 150 g/l and M = 64,500 g/mol [50]. The values of the molar extinction coefficients e_HbO2 and e_Hb in [cm⁻¹/(moles/liter)] were taken from [44] at the closest tabulated values for our wavelength channels in Λ. The most common chromophore present in human skin is melanin, whose spectral absorption coefficient can be approximated [32], [50] by

µ_a.mel(λ) = 6.6 × 10¹¹ · λ⁻³·³³ cm⁻¹, (13)

with λ in nm.
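The conversions in eqs. (11)-(13) can be sketched as follows; the ln(10) factor and the melanin power law follow the usual formulation attributed to [50] and should be checked against that reference.

```python
import numpy as np

G_HB = 150.0    # hemoglobin content (g/l), as set in the text
M_HB = 64500.0  # gram molecular weight of hemoglobin (g/mol)

def mu_a_hemoglobin(extinction):
    """Eqs. (11)-(12): molar extinction [cm^-1/(mol/l)] -> mu_a [cm^-1]."""
    return np.log(10.0) * extinction * G_HB / M_HB

def mu_a_melanin(lam_nm):
    """Eq. (13): power-law melanin absorption, wavelength in nm."""
    return 6.6e11 * lam_nm ** -3.33
```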
1) LINEAR-MODEL
This model assumes a minimal contribution of chromophores other than HbO2 and Hb; it has been used to evaluate oxygenation changes in hand occlusion [46] and tumors [48], and it was validated in vivo in an animal model [45]. According to this model, the estimated absorbance of incident light A_LM at channel λ is a linear combination of the chromophores:

A_LM(C_HbO2, C_Hb, α, λ) = C_HbO2 · µ_a.HbO2(λ) + C_Hb · µ_a.Hb(λ) + α, (14)

where (C_HbO2, C_Hb, α) are scaling coefficients. In this work, we employed the absorption coefficients µ_a.HbO2 and µ_a.Hb from eqs. (11) and (12), respectively. The model concentrates the contribution of the remaining chromophores in the bias term α. Given a subset of channels Λ_A ⊂ Λ, we estimate the optimal parameters (C_HbO2, C_Hb, α) at each pixel p ∈ P by minimizing the cost function

Σ_{λ∈Λ_A} (A(p, λ) − A_LM(C_HbO2, C_Hb, α, λ))², (15)

where A(p, λ) is the absorbance sampled from the multi-spectral camera in (10).
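A per-pixel least-squares fit of eq. (14) under the cost in (15) can be written as below; clamping negative unconstrained solutions to zero follows the procedure stated later in the text.

```python
import numpy as np

def fit_linear_model(A_pixel, mu_hbo2, mu_hb):
    """Fit eq. (14) at one pixel over the channel subset Lambda_A.

    A_pixel        : sampled absorbances at the chosen channels, shape (n,)
    mu_hbo2, mu_hb : absorption coefficients at the same channels, shape (n,)
    """
    # Design matrix [mu_a.HbO2(l), mu_a.Hb(l), 1], the 1 carries the bias alpha.
    X = np.column_stack([mu_hbo2, mu_hb, np.ones_like(mu_hbo2)])
    coef, *_ = np.linalg.lstsq(X, A_pixel, rcond=None)
    c_hbo2, c_hb, alpha = coef
    # No constraints are applied in the regression; negatives are clamped to zero.
    return max(c_hbo2, 0.0), max(c_hb, 0.0), alpha
```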
2) KUBELKA-MUNK MODEL
The Kubelka-Munk model was designed to describe light interactions in a multi-layer medium [51]. It estimates light reflectance from the spectral absorption and scattering coefficients and the thickness of the materials. When applied to human skin, these layers correspond to the epidermis and dermis. The former contains melanin and other minor chromophores such as bilirubin, collagen, keratin, and carotene; melanin is the most abundant chromophore in human skin, while the rest contribute only marginally in healthy subjects. At wavelength channel λ, the optical absorption coefficient of the epidermis layer µ_a.epi(λ) is characterized as

µ_a.epi(λ) = f_mel · µ_a.mel(λ) + (1 − f_mel) · µ_a.baseline(λ), (16)

where f_mel is a free parameter. In (16), we employ the spectral absorption of melanin µ_a.mel(λ) defined in (13), and for the baseline µ_a.baseline(λ) we employ the definition from [32] and [50]:

µ_a.baseline(λ) = 7.84 × 10⁸ · λ⁻³·²⁵⁵ cm⁻¹. (17)

Since the dermis contains blood vessels, this layer also presents hemoglobin-based chromophores. At wavelength channel λ, the dermis spectral absorption coefficient µ_a.der(λ) is defined as

µ_a.der(λ) = f_blood · (C_oxy · µ_a.HbO2(λ) + (1 − C_oxy) · µ_a.Hb(λ)) + (1 − f_blood) · µ_a.baseline(λ), (18)

where f_blood and C_oxy are free variables. For the scattering coefficients of both layers, we employ the definition of [50], where they are the sum of the Mie and Rayleigh scattering coefficients at channel λ:

µ_s.Mie(λ) = 2 × 10⁵ · λ⁻¹·⁵ cm⁻¹, (19)
µ_s.Rayleigh(λ) = 2 × 10¹² · λ⁻⁴ cm⁻¹, (20)
µ_s.epi(λ) = µ_s.der(λ) = µ_s.Mie(λ) + µ_s.Rayleigh(λ). (21)

According to the Kubelka-Munk model, the absorption coefficients from eqs. (16) and (18) and the scattering coefficients from (21) determine the amount of light moving in two opposite directions within the skin layers. The backward-flux variables K and the forward-flux variables β of each layer are defined as

K_epi(λ) = sqrt( µ_a.epi(λ) · (µ_a.epi(λ) + 2µ_s.epi(λ)) ), (22)
K_der(λ) = sqrt( µ_a.der(λ) · (µ_a.der(λ) + 2µ_s.der(λ)) ), (23)
β_epi(λ) = sqrt( µ_a.epi(λ) / (µ_a.epi(λ) + 2µ_s.epi(λ)) ), (24)
β_der(λ) = sqrt( µ_a.der(λ) / (µ_a.der(λ) + 2µ_s.der(λ)) ). (25)
The reflectances R_epi, R_der and the light transmitted from the epidermis to the dermis T_epi have the following expressions:

R_epi(λ) = (1 − β_epi²(λ)) · (e^{K_epi(λ)D_epi} − e^{−K_epi(λ)D_epi}) / ((1 + β_epi(λ))² e^{K_epi(λ)D_epi} − (1 − β_epi(λ))² e^{−K_epi(λ)D_epi}), (26)
R_der(λ) = (1 − β_der²(λ)) · (e^{K_der(λ)D_der} − e^{−K_der(λ)D_der}) / ((1 + β_der(λ))² e^{K_der(λ)D_der} − (1 − β_der(λ))² e^{−K_der(λ)D_der}), (27)
T_epi(λ) = 4β_epi(λ) / ((1 + β_epi(λ))² e^{K_epi(λ)D_epi} − (1 − β_epi(λ))² e^{−K_epi(λ)D_epi}), (28)

where the variables D_der and D_epi represent the thickness of each skin layer. The total reflectance measured at the surface of the skin, R_KM [32], [50], combines the two layers, including the inter-reflections between them:

R_KM(λ) = R_epi(λ) + T_epi²(λ) · R_der(λ) / (1 − R_epi(λ) · R_der(λ)). (29)

Given a fixed set of wavelengths Λ_R ⊂ Λ, the Kubelka-Munk model describes the light based on the absorption and scattering coefficients. Consequently, the total reflectance can be considered a function of the parameters over that set of channels,

R_KM(f_mel, f_blood, C_oxy, D_epi, D_der, λ). (30)

The model parameters are estimated by fitting a reflectance sample (9) to the Kubelka-Munk reflectance model, following equation (3) in [50]. The cost function used to identify the perfusion parameters is

Σ_{λ∈Λ_R} (R(p, λ) − R_KM(f_mel, f_blood, C_oxy, D_epi, D_der, λ))², (31)

where R(p, λ) is the reflectance sampled from the multi-spectral camera in (9). Hence, the optimal parameters f_mel, f_blood, C_oxy, D_der, D_epi are obtained at each pixel p ∈ P by minimizing the cost function in (31).
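The forward model of eqs. (22)-(29) is small enough to sketch directly; all inputs are per-channel arrays, and the two-layer composition in the last line corresponds to eq. (29).

```python
import numpy as np

def km_layer(mu_a, mu_s, thickness):
    """Two-flux reflectance and transmittance of one homogeneous layer."""
    K = np.sqrt(mu_a * (mu_a + 2.0 * mu_s))        # eqs. (22)-(23)
    beta = np.sqrt(mu_a / (mu_a + 2.0 * mu_s))     # eqs. (24)-(25)
    e_pos = np.exp(K * thickness)
    e_neg = np.exp(-K * thickness)
    denom = (1 + beta) ** 2 * e_pos - (1 - beta) ** 2 * e_neg
    R = (1 - beta ** 2) * (e_pos - e_neg) / denom  # eqs. (26)-(27)
    T = 4.0 * beta / denom                         # eq. (28)
    return R, T

def km_total_reflectance(mu_a_epi, mu_s_epi, d_epi, mu_a_der, mu_s_der, d_der):
    R_epi, T_epi = km_layer(mu_a_epi, mu_s_epi, d_epi)
    R_der, _ = km_layer(mu_a_der, mu_s_der, d_der)
    # Eq. (29): epidermis on a dermis backing, with inter-reflections.
    return R_epi + T_epi ** 2 * R_der / (1.0 - R_epi * R_der)
```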
E. COMPARISON AND VALIDATION
The open-source database presented in this work consists of a video sequence of multi-spectral images and PPG data recorded in-vivo from multiple subjects. The PPG data serves as a reference for estimating perfusion parameters from the thumb. These values were compared against MSI perfusion parameters from the fingertip of the middle finger.
We selected these locations based on the accuracy of the perfusion parameters measured in these positions, such as SpO 2 [52]. Due to the 10 min. duration of the occlusion protocol (see Fig. 1), maintaining a static posture for the subjects is challenging. Consequently, we implemented a tracking algorithm to monitor a region of interest (ROI) around the middle fingertip in the multi-spectral images and compare it with the thumb PPG data. This section elaborates on the ROI tracking and on the evaluation of the models for monitoring skin perfusion.
1) ROI AND TRACKING
In this study, we employed the Kanade-Lucas-Tomasi (KLT) feature tracking algorithm to track the fingertip movements of the participants [53]. We used the point tracker implementation from Matlab (MathWorks, Inc., Natick, Massachusetts, USA; R2020b). Initially, we manually selected a rectangular ROI surrounding the middle fingertip in the first frame of each participant's video sequence. The features employed for tracking were corners detected with the features from accelerated segment test (FAST) algorithm [54]. We conducted tracking on all participants every four frames throughout the entire experimental protocol. The ROI obtained from the tracking algorithm was multiplied by the energy mask (described in Section II-C) to isolate and process only the middle fingertip region for estimating the MSI perfusion parameters.
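A sketch of this tracking loop is given below; OpenCV's FAST detector and pyramidal Lucas-Kanade tracker stand in for the Matlab point tracker used in the paper, and the median-shift ROI update is a simplification introduced here.

```python
import cv2
import numpy as np

def track_roi(frames, roi):
    """Track FAST corners inside an initial ROI with pyramidal Lucas-Kanade.

    frames : iterable of 8-bit grayscale images (e.g. every 4th frame)
    roi    : (x, y, w, h) rectangle around the middle fingertip in frame 0
    """
    x, y, w, h = roi
    it = iter(frames)
    prev = next(it)
    # FAST corners inside the initial ROI, shifted back to image coordinates.
    fast = cv2.FastFeatureDetector_create()
    kps = fast.detect(prev[y:y + h, x:x + w], None)
    pts = np.float32([[kp.pt[0] + x, kp.pt[1] + y] for kp in kps]).reshape(-1, 1, 2)
    rois = [roi]
    for frame in it:
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev, frame, pts, None)
        ok = status.ravel() == 1
        good = nxt[ok]
        # Shift the ROI by the median point motion (robust to outliers).
        dx, dy = np.median(good - pts[ok], axis=0).ravel()
        x, y = x + dx, y + dy
        rois.append((x, y, w, h))
        prev, pts = frame, good.reshape(-1, 1, 2)
    return rois
```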
2) EVALUATION OF REGRESSION METHODS
We estimated the skin perfusion parameters by fitting the linear and Kubelka-Munk models to the input absorbance and reflectance signals, respectively. The perfusion parameters (C_HbO2, C_Hb, α) in (14) were estimated using least-squares regression. In a similar fashion to [45], we set all negative solutions to zero, as no constraints were applied in the regression. Meanwhile, to solve the regression problem in (31) for the perfusion parameters f_mel, f_blood, C_oxy, D_der, D_epi, we employed a particle swarm optimization method, using the Matlab implementation provided with the optimization toolbox. The boundaries employed for each perfusion parameter of the Kubelka-Munk model are detailed in Table 1. The optimization method was configured with a swarm size of 50, a maximum of 20 simulations, and an error tolerance of 10⁻⁶. All the signal processing was implemented on a Dell Precision 3660 workstation equipped with a 12th-generation Intel Core i7-12700K processor and 16 GB of RAM. The preprocessing stages, as well as the model implementations, were performed in Matlab. In this way, three perfusion parameters can be estimated from (15) and five from (31). However, we could not obtain reference values for melanin or other chromophores, nor for the thickness of the skin layers. Therefore, we restrict the analysis to the perfusion parameters related to hemoglobin, namely (C_HbO2, C_Hb, f_blood, C_oxy). This study aims to evaluate whether these parameters correlate with the measurements obtained by the PPG sensor.
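A bounded per-pixel fit of eq. (31) can be sketched as follows; SciPy's differential evolution is used here as a stand-in for the Matlab particle swarm optimizer, and the bounds are hypothetical placeholders for Table 1, which is not reproduced in this extract.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical bounds standing in for Table 1 (not reproduced here).
BOUNDS = [(0.0, 0.5),    # f_mel
          (0.0, 1.0),    # f_blood
          (0.0, 1.0),    # C_oxy
          (0.001, 0.4),  # D_epi (cm)
          (0.01, 1.0)]   # D_der (cm)

def fit_km_pixel(R_pixel, wavelengths, km_forward):
    """Minimize the cost in eq. (31) at one pixel.

    km_forward(params, wavelengths) must return the modelled reflectance
    R_KM for a candidate parameter vector.
    """
    def cost(params):
        return np.sum((R_pixel - km_forward(params, wavelengths)) ** 2)

    res = differential_evolution(cost, BOUNDS, maxiter=20, tol=1e-6, seed=0)
    return res.x  # (f_mel, f_blood, C_oxy, D_epi, D_der)
```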
In these identification processes for the skin perfusion parameters, one important challenge is the selection of the wavelength subsets Λ_A and Λ_R. The regression methods in (15) and (31) are sensitive to this prior information, since the spectral absorption and scattering coefficients are functions of the available wavelength bands Λ.
In this work, we perform an analysis of the Pearson correlation between the MSI and PPG perfusion parameters during the application of the occlusion protocol. Our goal is to select the wavelength channels with the best correlation against the PPG reference data, for both the linear model in (14) and the Kubelka-Munk model in (30).
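This selection reduces to ranking candidate channel subsets by the magnitude of their Pearson correlation with the PPG reference over the evaluation window, e.g.:

```python
import numpy as np
from scipy.stats import pearsonr

def best_subset(msi_param_by_subset, ppg_reference, t, t_start=300.0):
    """Rank candidate wavelength subsets by correlation with the PPG reference.

    msi_param_by_subset : dict mapping subset name -> MSI parameter time series
    ppg_reference       : reference PPG series (e.g. [HbO2]) on the same grid
    t                   : time stamps (s); only t >= t_start is evaluated
    """
    sel = t >= t_start
    scores = {name: pearsonr(series[sel], ppg_reference[sel])[0]
              for name, series in msi_param_by_subset.items()}
    return max(scores, key=lambda k: abs(scores[k])), scores
```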
III. RESULTS
The open-source database presented in this work comprises records from 45 subjects who provided informed consent. The age of the participants ranged from 18 to 24 years (mean = 20.17, SD = 1.73), with a majority being right-handed (44/45) and having Fitzpatrick skin type III (26 participants) or type IV (19 participants). The information for each participant is summarized in Table 2.
Sequences of the multi-spectral images and PPG data from all recruited subjects are accessible in the following Zenodo repository: https://doi.org/10.5281/zenodo.7860900. The database contains at least 2,445 MSI per participant (Fig. 2 shows an infographic of the available data [55]). Each multi-spectral image consists of nine single channels in PNG format, as detailed in subsection II-C. We opted for the PNG format due to its lossless compression and user-friendly metadata management, particularly for non-technical users such as those in the medical field. This choice aligns with our goal of creating a database accessible to multidisciplinary research teams. Full hand and finger masks obtained from the tracking process are also available. Table 2 presents the raw data of the experimental protocol for each subject, identifiable by a unique code. Age, gender, and skin type (per the Fitzpatrick scale) are reported, together with the systolic pressure recorded before the start stage of the protocol. The following columns detail the number of multi-spectral images per participant, each composed of nine channels, along with the length of the data collection in minutes, seconds, and milliseconds. The last column indicates the occurrence of reactive hyperemia, marked by a decrease in [HbO2] and an increase in [Hb] during the TO stage, as well as the inverse trend after the pressure release. Participants who presented high movement during the experiment are marked with an asterisk (*). Raw data files for all 45 participants, including the PPG files and the reference MSI data, are available for the evaluation of novel perfusion parameter estimation methods.
The PPG data are also included as comma-separated value files. Reference MSI data of the white material are also available to test different calibration methods.
Examples of the multi-spectral images obtained for a single participant, at two different times (start and TO stages) of the occlusion protocol, are displayed in Fig. 3. The perfusion parameters estimated at each location of the MSI can be arranged into perfusion maps. These images provide spatially localized quantitative information that can assist physicians in diagnosing and monitoring tissue conditions without a biopsy. Examples of the perfusion maps obtained for a single participant at different stages of the occlusion experiment are displayed in Fig. 4. While changes in C_HbO2, C_Hb, and f_mel are evident, the magnitude of these changes is measured by evaluating the correlation between the values obtained from the middle finger and the reference PPG signals from the thumb.
For the initial evaluation of the database, the participants who presented excessive movement during the protocol (P17, P38, and P39) were excluded. Additionally, subjects who did not exhibit hyperemia during the occlusion protocol were also excluded (participants: P4, P5, P18, and P22). Consequently, the validation experiments presented in the subsequent subsections are based on data from 38 participants.
A. PPG PERFUSION PARAMETERS
The measurements obtained are shown in Figs. 5 to 7. Figure 5 A) illustrates the range and mean values of the AC_Red and AC_IR estimations in the thumb. The observed data align with the measurements reported by Abay et al. [36], wherein a considerable decrease in the AC values occurs during total occlusion (6:00 to 8:00 min.). As depicted in Fig. 5 B), the DC components are also affected by the occlusion stages, with considerable inter-subject variability, particularly in the red component. Both the DC_Red and DC_IR components decline during the VO stage. However, during TO, DC_Red decreases while DC_IR increases above nominal values.
The resulting ratio of absorbances R_PPG in (1) is presented in Fig. 5 C). This perfusion parameter displays high variability throughout the entire experiment, with multiple peaks occurring even in stages without cuff pressure. The average value of R_PPG rises during the occlusion stages at 2:00-4:00 min. (VO) and 6:00-8:00 min. (TO).
The SpO2 range and mean values, as calculated from equation (2), are displayed in Fig. 6 A). This perfusion parameter also exhibits variability in the absence of applied pressure. It is worth noting that, according to the literature [56], SpO2 values below 70% are considered unreliable. This threshold is reached during the occlusion stages, which is why this parameter is not included in the evaluation against the MSI perfusion parameters. Next, Fig. 6 B) illustrates the estimation of PI_IR by (3). This perfusion parameter is sensitive to the reperfusion occurring after each occlusion stage, particularly around 4:00 and 8:00 min. During total occlusion, the values of PI_IR drop considerably. Figure 7 presents the changes in the hemoglobin contributions [HbO2] and [Hb] by (6) and (7), respectively. These values exhibit low inter-subject variability, particularly during the start (0:00-2:00 min.) and rest (4:00-6:00 min.) stages, and they remain reliable under occlusion, as reported by Abay et al. [37]. Both the range and mean signals decline during the VO stage and return to normal during the rest stage. During TO (6:00-8:00 min.), the mean value of [HbO2] decreases while [Hb] increases. Upon the release of the cuff pressure, the hyperemia stage (8:00-10:00 min.) is characterized by a rapid increase in [HbO2] levels and a decrease in [Hb]. Hence, the changes in these PPG perfusion parameters were utilized for the evaluation of the MSI parameters.
B. MSI PERFUSION PARAMETERS CORRELATION
In this preliminary analysis, we sought to validate the effectiveness of the MSI data for estimating in-vivo perfusion parameters during an occlusion protocol. According to the literature [40] and the obtained PPG measurements (see Fig. 7), the parameters least prone to inconsistencies during an occlusion are [HbO2] and [Hb]. Nonetheless, the estimated signals for each parameter display a similar trend throughout the initial three stages of the protocol (start, VO, and rest). In fact, during the VO stage (2:00-4:00 min.), both [HbO2] and [Hb] tend to increase and subsequently revert to a baseline state during the rest stage (4:00-6:00 min.). To avoid potential inaccuracies in the Pearson-correlation-based evaluation, we focused on assessing the correlation outcomes for the [HbO2] and [Hb] signals during the latter half of the experiment, from the 5:00 min. mark until the end.
1) CORRELATION BETWEEN [HbO 2 ] AND MSI PERFUSION PARAMETERS
The [HbO2] reference measurements were obtained from the thumb of each participant, based on the PPG measurements (see Fig. 1). They were estimated by (6) and are shown in Fig. 7. We evaluated the Pearson correlation of this signal against the perfusion parameters (C_HbO2, C_Hb, f_blood, C_oxy) from the linear and Kubelka-Munk models. These parameters were extracted from a ROI surrounding the middle fingertip, which was tracked throughout the occlusion protocol. The correlations calculated for the 38 participants during the rest, TO, and hyperemia stages, employing the wavelength subsets in (32) and (33), are illustrated as violin plots in Figs. 8 to 11 [57].
First, the hemoglobin perfusion parameters (C_HbO2, C_Hb) from the linear model in (14) exhibited mostly strong negative correlation values, with medians ranging from -0.5 to -0.7, as shown in Figs. 8 and 9. Nevertheless, a strong positive correlation was observed for C_HbO2, with a median of 0.8, using the second subset of Λ_A. This value represented the highest correlation with [HbO2] among all the MSI perfusion parameters evaluated. Next, we evaluated the blood perfusion parameters (f_blood, C_oxy) from the Kubelka-Munk model in (18). The estimated values of f_blood for the different subsets Λ_R show weak negative correlations (see Fig. 10), with median values around -0.3. Meanwhile, according to the findings in Fig. 11, only three configurations yielded moderate to strong positive correlations with C_oxy.
2) CORRELATION BETWEEN [Hb] AND MSI PERFUSION PARAMETERS
The correlation results for the PPG parameter [Hb] in (7), shown in Fig. 7, against the MSI perfusion parameters followed a complementary pattern. Moderate to strong correlations were obtained in multiple implementations with C_oxy from the Kubelka-Munk model (see Fig. 15). These results are consistent with those reported for [HbO2] in Fig. 11; that is, the correlations obtained for [Hb] have the opposite sign to those obtained for [HbO2].
IV. DISCUSSION
In this study, we developed an open-source database for measuring changes in hemoglobin concentrations with a sequence of multi-spectral images. These changes were induced by a controlled occlusion protocol that lasted 10 min. During the protocol, MSI data from the palm of the hand and PPG measurements from the thumb were recorded simultaneously. The database comprises records from 45 test subjects who provided informed consent and can be accessed upon request via Zenodo [55]. We also conducted a preliminary evaluation of the database. Our analysis of the PPG measurements revealed certain parameter failures during the occlusion stages, particularly concerning SpO2. The findings corroborate those of Abay et al. [36], demonstrating that [HbO2] and [Hb], as estimated from PPG sensors, are sensitive to the occlusion stages and capable of tracking phenomena such as reperfusion and hyperemia after the release of the cuff pressure.
In an initial evaluation of the MSI data, we tested two regression approaches for estimating perfusion parameters. These methods are based on prior knowledge, particularly the spectral absorption and scattering coefficients of the most prevalent chromophores in human skin. Our results confirmed strong correlations, both positive and negative, between the PPG- and MSI-based perfusion parameters. These preliminary outcomes indicate that MSI-based perfusion parameters can effectively measure changes in both oxygenated and deoxygenated hemoglobin. Additionally, the database provides valuable validation data, which is often challenging to obtain experimentally, for evaluating alternative MSI-based methodologies under a standardized protocol and controlled conditions. We anticipate that this database will be useful for validating novel MSI-based methods for the in-vivo estimation of perfusion parameters. Finally, the study population is young and representative of the Mexican population, which exhibits minimal variation in skin phenotypes; however, the sample does not include younger or older subjects. Owing to acquisition limitations, the evaluation was conducted on only a subset of the collected data: 38 subjects with clean PPG data and observable hyperemia following the total occlusion stage were included in the analysis. Future research will be committed to the development of practical perfusion monitoring in clinical settings. Moreover, we aim to estimate perfusion parameters without prior information, such as assumptions about the sample population.
"year": 2023,
"sha1": "49542d8cc88f7f704332d015bbd2a73f951bb55d",
"oa_license": "CCBY",
"oa_url": "https://ieeexplore.ieee.org/ielx7/6287639/6514899/10216961.pdf",
"oa_status": "GOLD",
"pdf_src": "IEEE",
"pdf_hash": "7e98b696600bd79e11f3a5d39c9e76f21c22fea6",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Borate-guided ribose phosphorylation for prebiotic nucleotide synthesis
Polymers of ribonucleotides (RNAs) are thought to have stored genetic information and promoted biocatalytic reactions for proto-life during chemical evolution. Abiotic synthesis of ribonucleotides was achieved in past experiments, with nucleoside synthesis occurring first, followed by phosphorylation. These abiotic syntheses are far removed from biotic reactions and face difficulties as prebiotic reactions, since they require reacting chemicals in a specific order and purifying intermediates from other molecules over multiple reaction steps. An alternative route, ribose phosphorylation followed by nucleobase synthesis or nucleobase addition, is closer to the biotic reactions of nucleotide synthesis. However, the synthesis of ribose 5′-phosphate under prebiotically plausible conditions has remained unclear. Here, we report a high-yield, regioselective, one-pot synthesis of ribose 5′-phosphate from an aqueous solution containing ribose, phosphate, urea, and borate by simple thermal evaporation. Of note, phosphorylation of ribose before nucleoside formation differs from traditional prebiotic nucleotide syntheses and is also consistent with biological nucleotide synthesis. Phosphorylation occurred to the greatest extent in ribose compared with the other aldopentoses, and only in the presence of borate. Borate is known to preferentially improve the stability of ribose, and geological evidence suggests the presence of borate-rich settings on the early Earth. Therefore, borate-rich evaporitic environments could have facilitated the preferential synthesis of ribonucleotides, coupled with the enhanced stability of ribose, on the early Earth.
The synthesis of biologically relevant ribose phosphates (i.e., ribose 5′- and 3′-phosphate) under plausible prebiotic conditions remains elusive. Urea 3, a simple amide molecule, has been used as an efficient abiotic catalyst for the phosphorylation of nucleosides 8,19. In addition, previous research reported that urea 3 could be a nucleobase precursor on the prebiotic Earth 20,21. Urea 3 is formed via the hydrolysis of cyanamide, which can form from ammonia and cyanide 19. Both ammonia and cyanide can be formed by impact-induced reactions 22. These reactions would have been common on the Hadean Earth, when impacts of large meteorites and asteroids were more frequent than today 22,23. Thus, it is reasonable to assume urea was a prebiotically available molecule on the early Earth. However, the effects of urea 3 on the phosphorylation of sugars have not been investigated. Therefore, we investigated the phosphorylation reaction of aldopentoses in the presence of urea 3.
Sugars are readily decomposed by heating; in particular, ribose is the least stable aldopentose 24. Previous studies have found that several oxyanions can improve the stability of sugars 13,25,26. In particular, borate 7 strongly stabilizes ribose 1 compared with other aldopentoses 27,28. This indicates that borate-rich environments could have been advantageous for nucleotide formation, as ribose 1 could have accumulated in such environments. Furthermore, borate 7 is known to control the phosphorylation site of nucleosides 29. The effects of borate 7 on other steps involved in prebiotically plausible nucleotide synthesis have been reported in many previous studies 6,30,31. Therefore, we investigated the effects of borate 7 on the phosphorylation of ribose 1 and other aldopentoses under evaporation.
Results
Here, we show the regioselective synthesis of ribose 5′-phosphate 6 from ribose 1 and dissolved phosphate 2 in the presence of urea 3 and borate 7 under simple drying conditions (Fig. 1). A near-neutral aqueous solution containing d-ribose, disodium monophosphate, urea, and boric acid (pH 8) was dried by heating at 80 °C for 24 h. The precipitates were hydrolyzed in acidic water (pH ~1) at 90 °C for 1 h and analyzed using high-performance liquid chromatography-tandem mass spectrometry (LC-MS/MS). The formation of a significant amount of ribose 5′-phosphate 6 (i.e., 22 mol% yield on average; n = 3; 1σ = ±2.5) in the reactions containing boric acid was confirmed by its LC retention time, MS/MS fragmentation pattern, and 31P-NMR (Fig. 2c-e, Figs. S1, S2, and S3). In the reaction without boric acid, only a small amount of ribose 5′-phosphate 6 (i.e., 4 mol% yield) was detected (Fig. 2c). A small amount of ribose 5′-phosphate 6 (i.e., 7 mol% yield) was also detected in the product formed with boric acid before acid hydrolysis. No ribose phosphate was formed from ribose and phosphate under the acidic condition (Fig. S4). The yield of ribose 5′-phosphate 6 in the reaction without boric acid before acid hydrolysis was <0.1 mol% (Fig. 2a). In the mass chromatogram corresponding to the molecular mass of ribosylurea phosphate 8, two significant peaks indicating the presence of ribosylurea phosphate 8 were detected only in the reaction containing boric acid before acid hydrolysis (Figs. 2b and S5). Amines tend to react with aldehydes to form imines; thus, we envisage that the 1′-aldehyde of ribose 1 reacted with urea 3 to form ribosylurea 9 32.
Discussion
These results indicate that ribose 1 was regioselectively phosphorylated at the 5′-hydroxyl group in the presence of borate 7. The residual fraction of ribose 1 was 19 mol% in the presence of borate 7 and 3 mol% in its absence, showing that borate improved the stability of ribose 1 in the reaction (Fig. S6). Borate 7 forms a complex with a diol whose hydroxyls face the same direction. In the case of ribose 1, complex formation with its 1′- and 2′-hydroxyl groups improves the stability of ribose 1 by fixing it in the furanose structure 33. In the present reaction, urea 3 reacted with the 1′-hydroxyl of ribose 1 32. Thus, borate 7 might have formed a complex with ribose 1 at its 2′,3′-diol. This complex formation fixed ribose 1 in the furanose form and improved its stability, leading to the high-yielding, regioselective phosphorylation of ribose 1 at its 5′-hydroxyl (Fig. 3). Phosphorylation at the 5′-hydroxyl restricts ribose exclusively to the furanose form, whereas ribose in the 2′-phosphate, 3′-phosphate, and 2′,3′-cyclic phosphate can adopt both furanose and pyranose forms. This one-pot synthesis of ribose 5′-phosphate 6 is a simpler reaction than the multistep nucleoside syntheses and indicates that ribose phosphorylation could have occurred before nucleoside formation on the Hadean Earth, particularly in borate-rich environments. A previous study showed the formation of α-cytidine nucleotide from ribose 5′-phosphate using cyanamide and cyanoacetylene 34. Another previous study reported that a photochemical reaction can convert the α-nucleotide to the canonical β-nucleotide 4. Thus, nucleotide formation from ribose 5′-phosphate is possible using small reactive molecules via a photochemical reaction, although the net yield is unclear 4,34. The ribose phosphate synthesis shown in this study therefore opens a new route for prebiotic ribonucleotide synthesis. This route is geochemically more plausible than the traditional prebiotic nucleotide syntheses and is consistent with extant biological nucleotide synthesis.
We further evaluated the phosphorylation of the other aldopentoses (i.e., arabinose 10, xylose 11, and lyxose 12; Fig. S7) in the presence of borate 7. The phosphorylated products of the aldopentoses other than ribose 1 showed multiple peaks in the LC-MS/MS chromatograms, indicating that phosphorylation occurred at different hydroxyls of each aldopentose (Fig. 4). The extent of phosphorylation was evaluated based on the peak areas of the phosphorylated aldopentoses, assuming that the ionization efficiency of these compounds is similar to that of ribose 5′-phosphate 6 (Fig. S8). Ribose 1 was selectively phosphorylated at the 5′-hydroxyl group with the highest yield of 22 mol%, whereas the other pentoses were phosphorylated at varied hydroxyl positions with total yields of 11, 8, and 19 mol% for phosphorylated arabinose 10, xylose 11, and lyxose 12, respectively (Fig. 4). Selectivity of phosphorylation was not apparent in the reactions carried out without borate 7 (Fig. S9). Therefore, the borate-guided phosphorylation of pentoses preferentially yields ribose 5′-phosphate 6.
The residual fraction of each aldopentose in the presence of borate 7 was 19, 41, 22, and 16 mol% for ribose 1, arabinose 10, xylose 11, and lyxose 12, respectively (Fig. S7). Compared with the other aldopentoses, the rate of phosphorylation, based on the residual amounts of the pentoses and their phosphorylated products, was higher for ribose 1 and lyxose 12 (Figs. 4, S7, S9, and S10). Ribosylurea 9 and lyxosylurea 13 tend to react with borate 7 at their 2′,3′-diol and become fixed in their respective furanose forms 33, because borate 7 forms a more stable complex with diols than with single hydroxyl groups 33. The remaining 5′-hydroxyl group can then react with phosphate 2 (Fig. S11). On the other hand, arabinosylurea 14 and xylosylurea 15 might remain in their chain forms when these molecules combine with borate 7, owing to the different directions of the hydroxyls on these molecules compared with those on ribose 1 and lyxose 12 (Fig. S11). Because the chain forms of the ureido-pentoses can also combine with borate 7 at their 3′,4′-diol and 4′,5′-diol positions in addition to the 2′,3′-diol position, phosphorylation can be partially inhibited by these borate-diol complexes. These differences in the forms of the complexes with borate 7 might underlie the different phosphorylation efficiencies observed for the varied pentoses, as well as the highest phosphorylation observed for ribose 1 (Fig. S11). Tourmaline, a boron-rich mineral, has been found in >3.7-billion-year-old metasediments in Isua, Greenland 35,36. Further, the likely presence of early evaporitic environments has been reported 37. These early Archean geological conditions presumably extended back to the Hadean Earth. Borate 7 is thought to have been present in the Hadean oceans as well; this might have led to borate 7 accumulation in evaporitic environments on the Hadean Earth 38. Ribose 1 might have preferentially accumulated over other aldopentoses in such environments, as borate 7 is known to contribute to the stabilization of ribose 1 13,38. A higher concentration of carbonate than at present is expected in the Hadean ocean, which was covered by a CO2-rich atmosphere. In such an ocean, higher phosphate 2 concentrations than today are expected, owing to the consumption of Ca2+, the cation that forms Ca-phosphate, as Ca-carbonate 39-42. The evaporation processes could also have accumulated dissolved phosphate 2 and urea 3 in the same places as borate 7 and ribose 1, and further induced the dehydration reaction leading to the phosphorylation of ribose 1. The present results indicate that borate-rich evaporitic environments on the prebiotic Earth could have enabled the preferential synthesis of ribose 5′-phosphate 6 before the formation of nucleosides.
Previous research reported nucleotide formation from ribose 5′-phosphate 6 using small reactive molecules or amino acids 35,43, although nucleotide synthesis through ribose 5′-phosphate 6 needs further investigation. Furthermore, many previous papers reported nucleoside formation from ribose under prebiotically plausible conditions 3,6,30,44,45. The literature indicates that nucleotide formation using ribose 5′-phosphate 6 could have been possible on the prebiotic Earth. Therefore, the one-pot synthesis of ribose 5′-phosphate 6 opens a new abiotic route of ribonucleotide formation that is geochemically more plausible and analogous to its biosynthesis (Fig. 1). This may provide a geochemical explanation of how ribose 1, the least stable aldopentose 24, became the selected sugar of RNA.
Preparative synthesis of ribose phosphates and ribosylurea. LC-MS/MS standards for ribose 2′-phosphate and ribose 3′-phosphate were prepared by heating 20 mM 2′-AMP and 3′-AMP, respectively, at 90 °C for 1 h in a 2 wt% sulfuric acid solution to hydrolyze the N-glycosidic bond between ribose and adenine. The LC-MS/MS standard of ribosylurea phosphate was prepared by heating 20 mM ribose 5′-phosphate and 80 mM urea at 60 °C for 24 h.
Experiments
The phosphorylation experiments were conducted in Eppendorf tubes. Each 20 µL reaction mixture contained 20 mM ribose, 800 mM urea, 40 mM boric acid, and 160 mM disodium phosphate in water. The starting materials were heated at 80 °C for 24 h with the lids of the tubes kept open. After heating, the residues were resuspended in 200 µL of water. Borate was removed from the product by adding 4 µL of sulfuric acid (95%) to the resuspended sample solution before the LC-MS/MS analysis. For investigating the acid-hydrolyzed product, a sulfuric acid solution containing the experimental residue was heated at 90 °C for 1 h with the lid closed. Hydrolysis would also have progressed in the prebiotic ocean, where the pH was close to neutral and the temperature lower than in the acid treatment, although at a correspondingly lower rate.
The control experiment under acidic conditions was conducted in Eppendorf tubes. A 20 µL reaction mixture containing 20 mM ribose, 800 mM urea, 40 mM boric acid, and 160 mM disodium phosphate in acidic water, adjusted to pH ~1 with sulfuric acid, was dried down at 80 °C. The experimental and analytical procedures were the same as in the other experiments. For investigating the effect of the acid treatment on phosphorylation, the starting material was diluted tenfold with pure water, adjusted to pH ~1 with sulfuric acid, and heated at 90 °C for 1 h.

HPLC analysis
The separation and detection of the products were conducted using a Shimadzu LCMS 8040 (Kyoto, Japan) in hydrophilic interaction chromatography mode with a HILICpak VT-50 2D column (5 μm, 2.0 mm ID, 150 mm length; Shodex). The sample was eluted isocratically with an aqueous solution containing 80% 25 mM ammonium formate and 20% acetonitrile at a total flow rate of 0.2 mL/min at 60 °C. Mass spectrometry was conducted in negative mode, with the desolvation, source, and heat block temperatures set at 250 °C, 120 °C, and 400 °C, respectively.

31P-NMR analysis
31P-NMR spectra were acquired with a Bruker AVANCE III 500 spectrometer. The high-concentration samples necessary for the NMR analysis were obtained by the following method. The 20 µL aqueous
Data availability
All data are available in the main text and supplementary information.
"year": 2022,
"sha1": "f64edcecf7b49c9455e255cc9daeb652eb634911",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-022-15753-y.pdf",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "f64edcecf7b49c9455e255cc9daeb652eb634911",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": []
} |
Coprological and haematological parameters of albino mice (Mus musculus) concurrently infected with Heligmosomoides bakeri and Trypanosoma brucei
The effect of concurrent infection with Trypanosoma brucei (T. brucei) and Heligmosomoides bakeri (H. bakeri) was investigated in this study. Thirty adult male albino mice were used, divided into six groups of five mice each. Group 1 served as the uninfected control; Groups 2 and 3 were infected with H. bakeri and T. brucei, respectively; Group 4 received both T. brucei and H. bakeri on the same day; Group 5 was experimentally infected with H. bakeri three days after T. brucei infection; and Group 6 was infected with T. brucei three days after H. bakeri infection. Blood and faecal samples were collected and analyzed weekly to determine the faecal egg counts (FEC), packed cell volume (PCV), and level of parasitaemia (LP). Weekly body weights (BW) were also recorded. FEC and parasitaemia increased in all infected groups during the study, but were significantly (p<0.05) higher in the multiple-infection groups (4, 5 and 6) than in the single-infection groups (2 and 3). The same trend was observed in the BW and PCV (p<0.05). The levels of infection produced by single infection with T. brucei and H. bakeri were similar (p>0.05). All treatment groups differed significantly (p<0.05) from the control group. From the results, it was concluded that concurrent helminth and protozoan infections produce a more deleterious effect on the host than single infection with either parasite. The pathology produced by concurrent infection was more severe when the host was exposed to the protozoan parasite before the helminth parasite.
Introduction
Gastrointestinal helminthosis is a major factor militating against profitable animal production around the world (Fabiyi, 1979; Chiejina, 1986). The prevalence and severity of gastrointestinal helminth infections have also continued to increase. This could be attributed to an increase in the occurrence of multiple infections involving these helminths and other pathogens in affected flocks (Griffin et al., 1981; Goosens et al., 1997). Concurrent infections involving gastrointestinal nematodes and Trypanosoma species are of particular interest as a result of the reported increase in the occurrence of mixed infections, especially in trypanosome-endemic regions of Africa (Darji et al., 1992). Grazing animals are usually exposed to concurrent infections, and the presence of one parasite may affect other parasites within the host system (Nwosu et al., 2006). The present study was therefore designed to further elucidate the pathologic effects produced by mixed infections, using Trypanosoma brucei (T. brucei) and Heligmosomoides bakeri (H. bakeri). Such knowledge has positive implications for increasing the profitability of livestock production in parasite-endemic areas.
Experimental animals
Thirty adult inbred male albino mice (Mus musculus) weighing 28-30 g were purchased from the Faculty of Veterinary Medicine, University of Nigeria, Nsukka, Nigeria. They were kept in rat cages with feed (Vital Feed, Nigeria) and water provided ad libitum. The experimental procedures were approved by the Ethical Committee of Michael Okpara University of Agriculture, Umudike, Nigeria, and the National Institutes of Health Principles of Laboratory Animal Care (NRC, 1985) were observed.
Sources of parasites
The H. bakeri used in the experiment was obtained from the Department of Veterinary Parasitology and Entomology, University of Nigeria, Nsukka. The parasites were passaged and maintained in mice. Faecal material obtained from the mice was collected, lightly macerated, and centrifuged at 313 × g for 2 minutes. The sediment obtained was reconstituted into a paste and cultured for 10 days at 25 °C. Infective larvae (L3) of H. bakeri were harvested using the modified Baermann technique (Hansen and Perry, 1994). The T. brucei used in the experiment was also obtained from the Department of Veterinary Parasitology and Entomology, University of Nigeria, Nsukka, and was maintained in mice.
Experimental design
The animals were randomly placed in six groups of five animals each and acclimatized for two weeks prior to the start of the experiment. Group 1 served as the uninfected control group, Groups 2 and 3 were infected independently with H. bakeri infective larvae (L 3 ) and T. brucei, respectively. Group 4 was infected with both T. brucei and H. bakeri L 3 on the same day. Group 5 was infected with T. brucei first and after three days with H. bakeri infective larvae while Group 6 received H. bakeri infective L 3 followed by T. brucei three days later. Individual body weights and packed cell volume (PCV) were recorded before the commencement of the experiment and every week subsequently till the end of the experiment. Individual FEC and parasitaemia were also determined and recorded every week from Week 1 post infection till the end of the experiment. The experiment lasted for 10 weeks.
Infection of the mice
H. bakeri
The mice were infected orally with 150 H. bakeri L3 suspended in 200 µl of distilled water. The mice were properly restrained before dosing, and exact volumes of the larval suspension were delivered with an automatic micropipette (Finnpipette®; Labsystems Oy, Helsinki, Finland) adapted to take a blunt, slightly curved 18-gauge needle as a dosing aid (Fakae, 2001).
T. brucei
The mice were inoculated intraperitoneally with 0.2 ml of infected blood containing approximately 1.0 × 10^5 T. brucei/ml.
Faecal egg counts
Weekly faecal egg count (FEC) determinations were carried out on the animals in all experimental groups using the salt flotation method and, as egg counts increased, the modified McMaster technique, as described by MAFF (1977); a sketch of the McMaster arithmetic is given below.
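The multiplication factor in the McMaster calculation depends on the sample mass, the flotation volume, and the counting-chamber volume; the factor of 50 used here (common for a 3 g in 45 ml, two-chamber protocol) is an assumption, since MAFF (1977) describes several variants.

```python
def epg_mcmaster(eggs_counted, multiplier=50):
    """Eggs per gram of faeces by the modified McMaster technique.

    eggs_counted : total eggs seen in the counting chambers
    multiplier   : assumed here as 50 (3 g faeces / 45 ml flotation fluid /
                   two 0.15 ml chambers); adjust for the actual dilution used.
    """
    return eggs_counted * multiplier

# Example: 14 eggs counted -> 700 eggs per gram of faeces
print(epg_mcmaster(14))
```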
Determination of the level of parasitaemia
The patency of T. brucei infection was determined by wet-film examination of blood from a tail snip by the method of Murray et al. (1983). Parasitaemia was estimated using the rapid matching technique described by Herbert and Lumsden (1976).

PCV
The PCV was determined by the microhaematocrit method. The mice were bled from the tail directly into heparinized capillary tubes.
Body weight determination
The mice were weighed using a desktop balance (Sartorius GmbH, Göttingen, Germany).
Data analysis
Data obtained were summarized as means ± standard errors, and differences between means were determined at the 5% level of significance using analysis of variance (ANOVA), as sketched below.
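The group comparison can be reproduced with a one-way ANOVA; the values below are illustrative placeholders, not the study data.

```python
from scipy.stats import f_oneway

# Illustrative weekly PCV values (%) for three of the six groups;
# these are placeholder numbers, not the measured data.
group1 = [45, 44, 46, 45, 44]   # uninfected control
group2 = [38, 37, 39, 36, 38]   # H. bakeri only
group5 = [30, 29, 31, 28, 30]   # T. brucei followed by H. bakeri

f_stat, p_value = f_oneway(group1, group2, group5)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}, significant: {p_value < 0.05}")
```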
Results
The effect of concurrent infection with T. brucei and H. bakeri on body weight is shown in Figure 1. By the second week post infection, there was a significant (p<0.05) decrease in mean body weight in all treated groups when compared with the control group (Group 1). This trend continued until week 10, with the control group showing a significant (p<0.05) increase in weight compared with all infected groups. The decrease in mean body weight was also significantly (p<0.05) greater in the multiple-infection groups (Groups 4, 5 and 6) than in the single-infection groups (2 and 3), with Group 5 showing a more marked (p<0.05) decrease than all other groups. There was a marked (p<0.05) drop in the PCV of all infected groups (Fig. 2). Mortalities were recorded by the 7th, 8th and 9th weeks in the multiple-infection groups (Groups 5, 6 and 4), with PCV values of 28±0.58%, 26±0.58% and 30±0.00%, respectively. Animals in Groups 2 and 3 survived to the end of the study (week 10), although there was a significant (p<0.05) decrease in PCV compared with the uninfected control (Group 1). The H. bakeri infection became patent between days 6 and 7 in Group 2, while in the multiple-infection groups patency was observed between days 2 and 4 post infection; Group 5 showed the earliest patency (day 2). Following patency, FEC continued to rise progressively in all infected groups until the end of the study (Fig. 3).
Fig. 3. Egg counts of mice experimentally infected with H. bakeri alone or concurrently with T. brucei and their control.
However, Groups 4, 5 and 6, infected concurrently with T. brucei, had significantly (p<0.05) higher egg counts than Group 2, which was infected with H. bakeri only. The effect of infection on parasitaemia is shown in Figure 4. The prepatent period of T. brucei infection was 2-3 days. Among the concurrently infected groups, parasitaemia was highest in Group 5.
Discussion
Progressive loss of weight could be attributed to the observed manifestations of typical signs of disease, such as reduced food and water intake, reduced activity, sleepiness, low PCV and, ultimately, death. In this study, the severity of weight loss was more marked in the groups concurrently infected with T. brucei and H. bakeri than in the single-infection groups. This agrees with the findings of Faye et al. (2002) and Kaufmann et al. (1992). The shorter prepatent periods observed in the concurrently infected groups may have occurred due to the additive effects of both parasites in the host, where the presence of one parasite creates a more favorable host environment for the proliferation of the second. This could also be attributed to suppression of the host immune response by the earlier introduction of the first parasite (Nwosu et al., 2001), as demonstrated specifically in the group which received T. brucei before H. bakeri. Trypanosomes have been reported to compromise the immune system of affected hosts (Albright et al., 1978; Van Dam et al., 1981). This suppression of immunity could have led to the increased pathological effects observed in this group. Anaemia was also observed in all the infected groups. There was a drastic reduction in the PCV of both single- and multiple-infection groups as the levels of parasitaemia and FEC increased. Anaemia is a predominant symptom and a reliable indicator of the severity of trypanosome infections (Anosa, 1988), and is also a major finding in gastrointestinal nematode infections (Steel et al., 1982; Behrens et al., 2001). The anaemia in the mice was manifested by varying degrees of reduction of the PCV below pre-infection values. The concurrently infected group which received T. brucei before H. bakeri had an earlier onset and a more severe anaemia, followed respectively by the group which received H. bakeri before T. brucei; the group which received T. brucei and H. bakeri on the same day; the group infected singly with T. brucei; and the group infected singly with H. bakeri. This agrees with the findings of Mbaya et al. (2009) in gazelles concurrently infected with Haemonchus contortus and T. brucei, who concluded that concurrent infection produced more depressive effects on all blood parameters than single infections. Similar findings have been reported by Nwosu and Ikeme (1992) in T. brucei infection in dogs and by Udensi and Fagbenro-Beyioku (2012) in mice. The intensity of the anaemia in the concurrently infected groups may have resulted from a synergistic action of the cell injury caused by trypanosomosis (Igbokwe, 1994) and the haematophagous activity of H. bakeri (Fabiyi, 1987), leading to a high rate of red cell loss. The high rate of red cell loss may therefore be due to the combined effects of a haemorrhagic and haemolytic anaemia related to the presence of both parasites in the host (Dargie and Allonby, 1979). It is noteworthy that the response of the group which received both infections on the same day was, for both parameters (PCV and FEC), similar to the effect produced by single infection with the individual parasites. The results also imply that T. brucei infection superimposed on H. bakeri infection aggravated the damage caused by the helminth parasite.
This agrees with the findings of other researchers (Philips et al., 1974; Fakae et al., 1994; Onah and Wakelin, 1999; Chiejina et al., 2005). Also, mortality rates were higher in all groups exposed to multiple rather than single infections. In conclusion, the results showed that concurrent helminth and protozoan infections produced more pathologic effects than single infection with the individual parasites. The severity of infection increased when the animals were exposed first to the protozoan parasite prior to the helminth parasite, as seen by the earlier onset and more acute progress of the disease. It is therefore recommended that, in trypanosome-endemic areas, routine screening and prophylaxis for both parasites be carried out for more effective management and disease control.
"year": 2013,
"sha1": "681e6ee10c41e56720688ebb399ccdbc725849be",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "1121633e87607e9042cc1519ef11400c2bd9d0cd",
"s2fieldsofstudy": [
"Biology",
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Are vertical evacuation buildings in Banda Aceh meeting the building standards?
The tsunami vertical evacuation buildings in Banda Aceh were built after the 2004 tsunami. The buildings were intended as one of the mitigation strategies for the communities living in the tsunami-prone zone, and should therefore be designed as robust structures able to resist both earthquake and tsunami impacts. Previously, there was no national standard for the design of tsunami-resistant structures: the state of building damage that had been considered was based only on earthquake impacts, which include ground shaking, soil failure, and surface fault rupture. As the Indonesian standard did not cover tsunami loads, it is important to review the capability of the existing vertical evacuation buildings in Banda Aceh. This study aims to explore the capability of the existing tsunami evacuation buildings as vertical evacuation in Banda Aceh. The observation includes a tsunami load assessment of the buildings based on standards that have been used internationally. The existing vertical evacuation building in this study is analysed for its performance against the international standard for tsunami loading impact (The Federal Emergency Management Agency, FEMA P646 2012), while the earthquake performance is assessed against the national standard (SNI 1726 2012). The model of the building structure was generated using structural analysis software, i.e. SAP2000. The earthquake simulation was based on the 2004 earthquake scenario, input as dynamic loading, while the tsunami loading was analysed statically. The tsunami characteristics, such as inundation depth and flow speed, were taken from existing data and research. The results show that the building needs to be strengthened, as it will suffer large inter-story displacements in the first and second floors.
Introduction
One of the mitigation strategies to reduce risk in tsunami-prone zones is building tsunami vertical evacuation centers. This is an important strategy for reducing tsunami risk in flat areas where natural evacuation options, such as hills, and other existing buildings are not available. This strategy was applied in Banda Aceh after the 2004 tsunami. The topography of Banda Aceh city is flat. In 2004, the casualties of the tsunami disaster were mostly from this city and Aceh Besar district, especially from Meuraxa sub-district. Four evacuation buildings were built in this sub-district; one of them also serves as the office of the Tsunami and Disaster Research Center, Universitas Syiah Kuala, Banda Aceh, Indonesia. These buildings were built before Indonesia released the new code for earthquake-resistant design, SNI 1726-2012 [1]. The main difference between the updated standard and the previous one is the revised earthquake hazard map. Since the tsunami vertical evacuation buildings in Meuraxa, Banda Aceh, Indonesia, were built before 2012, it is assumed that the standard that had been used was SNI [2]. As the buildings are intended as evacuation centers, which need to resist earthquake loads, their performance should be checked against the recent standard.
Tsunami load is assumed not to have been included in the design, because there was no standard for tsunami-resistant structures in the Indonesian code. It is therefore important to review the capability of the existing vertical evacuation buildings in Banda Aceh. Standards for tsunami loading on buildings have since been developed, i.e. the Guideline for Tsunami Evacuation Buildings from the Cabinet Office, Government of Japan [3], the Federal Emergency Management Agency (FEMA) P646 [4], and the tsunami loads and effects chapter of the ASCE/Structural Engineering Institute 7 standards committee [5]. Several researchers have also studied tsunami loads and effects on building design intensively, namely Yeh et al. [6], Heintz and Mahoney [7], Fukuyama et al. [3], and Chock [5]. In summary, the tsunami loads that need to be considered for tsunami vertical evacuation and other near-shore buildings are the hydrostatic load, hydrodynamic load, buoyant load, impulsive load, and debris impact. This paper explores the capability of the existing tsunami evacuation buildings as vertical evacuation in Banda Aceh based on the new earthquake-resistant structure standard (SNI 1726 2012) and tsunami loads based on the inundation and run-up modeling from an existing study [8]. The vertical evacuation building in Lambung, Meuraxa district, was observed by modeling the structure with the structural analysis software SAP2000. The building is assumed to have a design similar to the other two buildings, built as reinforced concrete open structures (frames with very minimal wall infill).
Building assessment and modeling
To start modeling the building for structural analysis, the building properties were assessed. The building and frame dimensions were measured and sketched. The dimensions of the vertical evacuation building in Lambung, based on the observations, are as follows: circular columns with diameters of 700 mm and 500 mm, and rectangular beams of 600 × 400 mm and 300 × 200 mm. The building data were then input into a sketch for SAP2000 analysis. The model of the structure is shown in Figure 1. As can be seen from the figure and the collected data, the building meets the criteria for tsunami evacuation buildings in its type of construction, i.e. a reinforced concrete frame, and it is mainly an open structure with minimal wall infill. Fraser [9] observed that tsunami vertical evacuation structures with reinforced concrete frames can support the tsunami load effectively up to a certain tsunami inundation height. The design of a tsunami vertical evacuation building is intended to have open space, allowing the tsunami flow to pass through the building without adding pressure on it [10]. The dead load and live load were then input based on the building information; the reference for the live load was SNI 1727:2013.
Earthquake Loading
The earthquake loading on the building was developed using the waveform of the Sumatra-Andaman earthquake recorded on 26 December 2004 at the PSI station; the waveform was then converted into a response spectrum using the DADiSP/SE 6.7 application. The response spectrum was input into SAP2000 as a load with scale factor = g × I/R, where g is the gravitational acceleration (9.81 m/s²), I is the importance factor, equal to 1.5 as the building belongs to risk category IV (important buildings), and R is the response reduction factor, equal to 8. The resulting scale factor is 1.84.
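As a quick sanity check, the scale factor above can be reproduced directly from the quantities given in the text; a minimal sketch in Python follows (the values are taken from the description above, not from the standard itself).

```python
# Response-spectrum scale factor, SNI 1726:2012 convention as described above.
g = 9.81  # gravitational acceleration, m/s^2
I = 1.5   # importance factor (risk category IV buildings)
R = 8.0   # response reduction factor

scale_factor = g * I / R
print(f"Scale factor = {scale_factor:.2f}")  # 1.84, matching the text
```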
Tsunami Loading
The tsunami loading on the building consists of hydrostatic forces, hydrodynamic forces, impulsive loads, debris impact and debris damming forces. These loads were calculated using the equations of FEMA P646; Table 1 summarizes the load components.

Table 1. Tsunami load components considered (after FEMA P646).
Hydrostatic force: force of the water mass acting on one side of the structure when the water elevation differs from that on the other side.
Hydrodynamic force: tsunami load from the flow passing around the building.
Impulsive force: force caused by the impact of the leading wave front on the building structure.
Debris impact force: force from the impact of debris carried by the tsunami wave.
Debris resistance (damming) force: force from the accumulation of debris stacked against the structure surfaces.
The hydrostatic force was input as a load per meter with a triangular (pyramid-shaped) distribution along the submerged part of the columns. The hydrodynamic, impulsive and debris resistance loads were applied as equivalent loads per meter along the wet columns, while the debris impact was input as a point load on the column. The elevation differences of the soil surface in the area surrounding the building location are described in Figure 2.
Figure 2. The different elevations of soil surfaces in Banda Aceh [8]; the red triangle marks the location of the Lambung evacuation building.
The tsunami inundation data, design run-up and wave height were taken from the existing study of Syamsidik et al. [8]. The tsunami wave depths from that study for the Ulee Lheue surroundings ranged from 6 (six) to 10 (ten) meters. The study had been validated against tsunami inundation data from NOAA and tsunami poles in the city, and the simulation was based on a 9.5 Mw earthquake. The evacuation building is located 500 m from the shoreline. Based on the simulation data of [8], the tsunami data for the location can be summarized as follows:
Run-up height (R) = 8 m
Building elevation (z) = 0.9 m
Flow depth = 7.0 m
Gravitational acceleration = 9.81 m/s²
Density of the tsunami flow (ρs) = 1100 kg/m³
Drag coefficient (Cd) = 2
The run-up height at the location for the 2004 tsunami scenario was 8 m, while the height of the first story is 6 m. The required minimum height for evacuation buildings is taken as the run-up elevation increased by 30% plus an additional 3 m [5]. Thus, the minimum height for the Lambung area would be at least 13.4 m for the area occupied by refugees. In other words, the two lower floor levels of the building are not usable as refuge area under the 2004 tsunami simulation. The hydrodynamic forces were assumed to act on the columns, as the first-floor structure is open, without infill walls. The tsunami hydrodynamic force on the columns was 67.11 kN/m². The debris impact was assumed to be a lumber or wood log, as the building is located not far from the shoreline with little residential area in front of it. The resulting force acting on the column would be 500672.5 N.
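A minimal sketch of the two quantities discussed above: the required minimum evacuation height (run-up increased by 30% plus 3 m) and the FEMA P646 drag force Fd = 0.5 ρs Cd B (h u²). The flow velocity u and the obstructed width B are not reported in the text and are illustrative assumptions here, so the computed force is indicative only.

```python
R_runup = 8.0  # design run-up height, m (from the inundation study [8])
min_height = 1.3 * R_runup + 3.0
print(f"Required minimum evacuation height: {min_height:.1f} m")  # 13.4 m

rho_s = 1100.0  # density of the tsunami flow, kg/m^3
Cd = 2.0        # drag coefficient
h = 7.0         # flow depth, m
u = 7.8         # flow velocity, m/s -- assumed, not given in the text
B = 0.7         # obstructed width (column diameter), m -- assumed

# FEMA P646 drag (hydrodynamic) force on a single column
Fd = 0.5 * rho_s * Cd * B * h * u**2
print(f"Hydrodynamic force on one column: {Fd / 1000:.1f} kN")
```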
Earthquake load assessment
The dynamic response spectrum results for the vertical evacuation building in Lambung, Banda Aceh, obtained with SAP2000, show that the natural period of the building is about 0.2670 s. The approximate fundamental period (Ta) based on SNI 1726-2012 [1], for buildings with fewer than 12 stories whose moment-resisting frames are made of reinforced concrete, is taken as Ta = 0.1 N, where N is the number of stories. The period used for further calculation was the natural period of the building (T), equal to 0.2670 s. The modal mass participation ratio from the SAP2000 dynamic analysis was 98.8%, which satisfies the requirement of an effective mass participation factor of at least 90%. The total weight of the building is 17371.239 kN; thus, the horizontal earthquake load (V) is equal to 3419.96 kN.
The base shear of the building due to the earthquake load obtained from SAP2000 was less than 0.85 of the base shear computed from the total weight multiplied by the seismic coefficient. As this does not comply with the national standard of Indonesia (SNI 1726:2012), the SAP2000 base shear needed to be enlarged by a scale factor in the X and Y directions. The final building base shear due to the simulated earthquake is listed in Table 2.
Tsunami loading analysis
The tsunami loading was simulated as static loads in SAP2000, assuming a two-dimensional load model of the building. The tsunami loadings based on FEMA P646 in this simulation were limited to the hydrodynamic and debris impact loads, because the tsunami evacuation building is an open structure. The loads were calculated using the FEMA P646 formulas summarized in Table 1. The results show that the hydrodynamic loading was around 67.11 kN/m² and the debris load, modeled as a wooden log, was around 500672.5 N.
3.3 Inter-story displacement due to earthquake and tsunami loads
The performance of the structure under the earthquake and tsunami loadings was assessed through the inter-story displacements of the building. Figure 3 shows the story displacement and inter-story displacement of the building due to the X- and Y-direction earthquake loads and the static tsunami loading. Overall, the inter-story displacement due to the earthquake is satisfactory, as it remains below the allowable inter-story displacement. However, the inter-story displacement due to the tsunami load is considerably higher than the allowable value. The building columns are therefore suggested to be strengthened to increase the displacement capacity. The tsunami load, acting mainly on the first story, produces additional displacement of the columns at that level. The significant inter-story displacements under the two loads can be seen in the graph below. The flow depth in the area was 7 m, with a design run-up assumed to be 8 m; the occupiable area must therefore be at least the third floor of the building. This simulation should be considered when re-calculating the safe refuge area. As the building has four stories, the most reliable evacuation area would be the fourth floor.
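The drift check itself reduces to comparing successive story displacements against the allowable value. A minimal sketch follows; the displacement profiles are illustrative placeholders, not SAP2000 output, and the allowable drift is passed in as a parameter.

```python
def check_drifts(story_disp, allowable):
    """story_disp: total lateral displacement at each floor, bottom-up (m)."""
    drifts = [d - prev for prev, d in zip([0.0] + story_disp[:-1], story_disp)]
    for i, d in enumerate(drifts, start=1):
        status = "OK" if abs(d) <= allowable else "EXCEEDS allowable"
        print(f"Story {i}: drift = {d:.3f} m -> {status}")

# Hypothetical four-story displacement profiles (m):
check_drifts([0.010, 0.022, 0.030, 0.035], allowable=0.020)  # earthquake-like
check_drifts([0.060, 0.095, 0.110, 0.118], allowable=0.020)  # tsunami-like
```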
Conclusion
Overall, the building is considered safe for the 2004 earthquake scenario based on SNI 1726-2012; however, it would suffer large displacements due to tsunami loading on the first and second floors. It is therefore important to strengthen the building against the large deformations caused by the lateral impact of the tsunami wave. The area available for occupancy under the 2004 tsunami scenario would be the fourth floor, so the effective number of people that can be evacuated in the building needs to be estimated.
Figure 3. Inter-story displacement (m) for Earthquake X, Earthquake Y and Tsunami loads, together with the allowable maximum inter-story displacement. | 2021-05-11T00:04:05.200Z | 2021-01-13T00:00:00.000 | {
"year": 2021,
"sha1": "daebc9a2464d8f59b58b7cd0c660c413b33d61c2",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/630/1/012006",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "220af268c391a81484db8ab0f6751dd07d286e6b",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Physics",
"Geography"
]
} |
55347987 | pes2o/s2orc | v3-fos-license | Effects of blanching time and dehydration condition on moisture and ascorbic acid retention in tender pumpkin ( Cucurbita moschata ) leaves
A study was conducted to assess retention of moisture and ascorbic acid in tender pumpkin (Cucurbita moschata) leaves subjected to different blanching times and dehydration conditions. Equal portions of the leaves were blanched for 5, 10 and 15 min. Half of each portion was dehydrated under a shade and the other under direct sunlight. All the samples were then analysed for moisture and ascorbic acid contents in comparison with those of the raw-leaf sample. The fresh samples (wet basis) had 79.26±5.08 mg/100 g ascorbic acid, compared to the processed samples, which ranged from 41.10±2.94 to 73.39±5.87 mg/100 g in the blanched shade-dehydrated samples and from 17.61±0.00 to 35.23±5.08 mg/100 g in the sun-dehydrated blanched samples. The results further showed that the samples that were blanched for a shorter time and dehydrated under a shade retained a significantly higher (p<0.05) amount of ascorbic acid compared to those that were blanched longer and dehydrated under direct sunlight. There was no significant difference in moisture content between the shade-dehydrated and sun-dehydrated samples, which were found to be in the ranges of 13.22±0.09 to 14.23±0.27% and 12.45±0.035 to 13.42±0.52%, respectively. It was, therefore, concluded that a shorter blanching time and shade-dehydration can retain ascorbic acid in tender C. moschata leaves without compromising the moisture content of the product.
INTRODUCTION
Pumpkin plant (Cucurbita moschata), also known as 'tropical pumpkin', is one of the well-known and highly utilised plants cultivated throughout the world, particularly in lowland areas of Asia, Africa and America. The pumpkin plant is unique in that almost every part of it (except the roots) is edible. Flowers, fruit, and long tendril shoots and leaves are relished as vegetables. The leaves and tender young shoots are cooked as vegetables and used as a potherb or added to soups and stews (Lim, 2012a). Pumpkin blossoms are edible raw or cooked, but when mature, the fruit is cooked as a main course or side dish and used as an ingredient in pies, soups, stews and bakery preparations (Lim, 2012a; Durante et al., 2014). Seeds are eaten raw, dried or roasted and can be served as a snack, but can also be ground into a powder and used with cereals and in bread making (Lim, 2012a).
This has made pumpkin a plant of interest for researchers. Although many studies have been conducted on the pumpkin plant, most of them have targeted the fruit and seeds only (Stevenson et al., 2007; Mala et al., 2016). Pumpkin leaf is one of the most consumed parts of the plant in some parts of the world, including Sub-Saharan Africa (Lymo et al., 1991; Lim, 2012a), but only limited information on it is available.
Vegetables play crucial roles in alleviating hunger and in food security by contributing the bulk of the nutritional components in the diets of people where animal products are scarce (Mepba et al., 2007). Just like many other green leafy vegetables, pumpkin leaf is tasty and nutritious and is popular in countries such as Kenya, Malawi, Zambia and Zimbabwe, among others. The leaves are a valuable source of nutrients (which are usually in short supply in daily diets), especially in rural areas (Lymo et al., 1991; Mepba et al., 2007), where they contribute substantially to intakes of minerals, fiber, protein and vitamins, especially β-carotene and ascorbic acid (FAO, 1988; Mwaniki et al., 1999; Adegunwa et al., 2011; Kakade and Neeha, 2014). Pharmacologically, leaves in the family Cucurbitaceae are believed to have a number of health benefits. Among ethnomedicine users, it has been reported that the leaves are used for reducing fever, treating nausea and boosting haemoglobin content. The leaves are also believed to help in preventing convulsions, for which young leaves are sliced and mixed with coconut water and salt, then stored and used for the treatment. Moreover, the leaves have been reported to boost fertility, as a result of the zinc and essential fatty acids present, to protect the liver and to cure anaemia (due to the presence of iron). Furthermore, the sufficient amount of vitamins is reported to help in boosting vision as well as supplementing the daily protein requirement of the body (Dhiman et al., 2012; Lim, 2012a, b). An in-vitro study by Kwak and Ju (2013) has shown the anti-cancer properties of extracts from C. moschata leaves.
In most cases, the pumpkin plant is grown mainly for the fruit; as such, the availability of the leaf as a vegetable depends on the time of the year when the fruit is expected to do well. Although pumpkins are not necessarily seasonal in nature, in countries like Malawi they are mostly grown during the rainy season; as a result, the leaves are abundant during this period and become scarce thereafter. To ensure their constant availability, in most developing countries pumpkin leaves are traditionally processed into a dehydrated form, which mostly involves blanching and sun-dehydration (Lymo et al., 1991; Mepba et al., 2007). Although this is done traditionally, the scientific reasoning is that blanching deactivates enzymes, while dehydration reduces water activity, which prevents the growth of moulds and other microorganisms, thereby preventing spoilage of the preserved vegetable (Fellow, 2009).
In fact, there are several methods of processing vegetables for preservation, which include sun-dehydration, canning, vacuum packing, minimal processing, refrigeration, freezing and irradiation (Fellow, 2009). Of all these processing methods, sun-dehydration has been regarded by local people in Malawi as the most effective, cheap and popular method of processing pumpkin leaves for preservation. Under this method, people prefer blanching followed by sun-dehydration over sun-dehydration without blanching, with the reasoning that blanching helps deactivate spoilage enzymes and kill spoilage microorganisms, and hence yields a vegetable of higher quality. While fresh pumpkin leaves are perishable due to high water activity, dehydrated pumpkin leaves keep longer than fresh ones. Dehydrated vegetables stored in good containers and kept in dry conditions can have a shelf life of more than a year (Musarirambi et al., 2010). Dehydration also reduces the weight of the vegetable, thereby making it easy to transport (Fellow, 2009).
Although this kind of processing has been regarded as significant in preserving pumpkin leaves, the blanching and sun-dehydration steps have great potential to reduce some nutrients in the vegetable, especially ascorbic acid (vitamin C), which is soluble in water and prone to oxidation upon exposure to light (Lawal et al., 2015; Okpalamma et al., 2013; Adegunwa et al., 2011). Despite the possibility that blanching and dehydration can lead to ascorbic acid loss, there is no clear information on the amount that is lost as a result of this processing method. Lack of knowledge of the retained amount might interfere with the formulation of balanced diets, hence the need for a study of this nature that focuses on the effect of blanching time and/or dehydration condition on ascorbic acid loss and suggests how best the vegetable can be processed to ensure its maximum possible retention.
Sample collection
Fresh and tender pumpkin leaves of the C. moschata species (over 2 kg) were purchased in the morning hours (around 8:00 am) from a single seller at a local market. The leaves were immediately brought in airtight plastic carrier bags to a laboratory, which was about 3 km from the market, for processing; thus the vegetables arrived at the laboratory still very fresh. While the initial contents of ascorbic acid and moisture in the fresh unprocessed portion of the sample were being analysed, the rest of the vegetables were stored in the same carrier bag in a refrigerator and were processed within 4 h. This was done to minimise the action of spoilage microorganisms and enzymes that could alter the characteristics of the vegetables. All chemicals used were of analytical grade.
Sample preparation
The pumpkin leaves were first cut into slices of about 1 cm in width using a stainless steel knife, then divided into 4 portions of 400 g each. One portion was analysed immediately for moisture and ascorbic acid contents. The remaining 3 portions were immersed into separate beakers of 500 mL pre-boiled distilled water and boiled further for 5, 10 and 15 min, respectively. At the end of the boiling time, each sample was drained in a polypropylene colander until the liquid stopped dripping. Each sample was then divided further into two equal portions. Each portion of these blanched samples was spread on a separate clean traditional bamboo winnower; one portion was dehydrated under a shade at ambient temperature of about 22 to 26°C, while the other was dehydrated under direct sunlight (32±4°C). The dehydration process lasted 5 days, after which the leaves were removed, put into air-tight polythene bags and kept in a dry, well-ventilated place at ambient temperature (22 to 26°C) until analysis. The samples were named according to the blanching time and method of dehydration; for example, '5 min SD' was a sample blanched for 5 min and sun-dehydrated, while '5 min SHD' represents one blanched for the same 5 min but shade-dehydrated.
Moisture retention determination
Moisture content was determined using the AOAC (2000) method.
A crushed sample (2 g) was put into a beaker which had previously been cleaned, dried for 1 h in an oven and cooled in a desiccator for 30 min. The initial weight of the beaker with sample was recorded. The sample in the beaker was then dried for 6 h in an air-circulating oven set at 100°C, cooled in a desiccator for 1 h and reweighed. Moisture content was calculated as a percentage using the following formula:

Moisture content (%) = (A - B) / C × 100

Where: A = initial weight of the beaker with sample; B = final weight of the beaker plus sample after oven-drying; C = initial weight of the sample before oven-drying. Moisture retention was calculated as a percentage of the dehydrated-sample moisture content relative to that of the original fresh leaves.
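The calculation above is straightforward to express in code. The sketch below implements the formula as defined; the weights are illustrative, not measurements from the study, while the retention example reuses the ascorbic acid values reported in the abstract.

```python
def moisture_content(A, B, C):
    """A: beaker + sample before drying (g); B: after drying (g); C: sample (g)."""
    return (A - B) / C * 100.0

def retention(processed, fresh):
    """Retention (moisture or ascorbic acid) as % of the fresh-leaf value."""
    return processed / fresh * 100.0

print(f"{moisture_content(A=52.00, B=51.73, C=2.00):.2f} %")  # 13.50 % (illustrative)
print(f"{retention(73.39, 79.26):.1f} %")  # 5 min SHD ascorbic acid retention, ~92.6 %
```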
Ascorbic acid retention determination
Ascorbic acid (AA) was analysed by the AOAC (2000) titrimetric method using 2,6-dichlorophenolindophenol (DCPIP) as a redox dye. To begin with, 30 g of dehydrated sample was ground finely using a mortar and pestle to pass through a 100-mesh sieve. Then, 90 mL of water was added to make a ratio of 1:3, and the mixture was transferred into a 200 mL beaker. Two spatulas of activated charcoal were added and the mixture was boiled for 10 min to remove the green colour that would interfere with the observation of the colour change during titration. After cooling in a water bath, the sample was filtered through a Whatman No. 1 filter paper. The filtrate, in triplicate, was then used for the analysis of the ascorbic acid content of the sample using the above AOAC standard method. Ascorbic acid retention was calculated as a percentage of the dehydrated-sample ascorbic acid content relative to that of the original fresh leaves.
Statistical analyses
One-way analysis of variance (ANOVA), with Duncan's multiple range test, using a SAS program (version 8.1, SAS Institute Inc., Cary, NC, USA), was conducted to assess the significance of differences (p<0.05) among the obtained mean values.
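For readers without SAS, the omnibus comparison can be sketched with SciPy; the triplicate values below are synthetic, and since SciPy has no direct equivalent of Duncan's multiple range test, a plain one-way ANOVA F-test is shown instead.

```python
from scipy import stats

# Synthetic triplicates (ascorbic acid, mg/100 g), not the study's raw data
shd_5 = [72.1, 73.4, 74.6]
shd_15 = [40.2, 41.1, 42.0]
sd_15 = [17.6, 17.6, 17.6]

f_stat, p_value = stats.f_oneway(shd_5, shd_15, sd_15)
print(f"F = {f_stat:.1f}, p = {p_value:.3g}")  # p < 0.05 -> means differ
```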
RESULTS AND DISCUSSION
The results for both moisture content and ascorbic acid determinations and retentions of the differently processed dehydrated pumpkin leaf samples are presented in Table 1.
Moisture retention
In the preservation of vegetables by the dehydration technique, the moisture content of the final product is of great importance, as it determines the product's longevity on the shelf. Usually, dehydration under direct sunlight is preferred, as it is believed to reduce the moisture to the minimum level. However, from Table 1 it can be observed that there were no significant differences (p>0.05) between samples dehydrated under direct sunlight and those dehydrated under a shade. Much as direct sunlight might be efficient in terms of dehydration time, it has great potential to affect the retention of some nutrients such as vitamin C, and, being freely available, its efficiency carries no economic advantage. As such, dehydration techniques that can retain nutrients would be of great importance. Compared to the findings of similar studies, the moisture contents of the fresh and processed dehydrated leaves were substantially lower than those reported by Onoja (2014) in fluted pumpkin (Telfairia occidentalis) leaves. The difference in the moisture content of the fresh leaves may be attributed to differences in plant species and the water composition of the areas where the plants for these two studies were grown, while that of the dehydrated leaves may be due to the length of the dehydration time, the ambient temperature and the air circulation of the dehydration environment.
A thorough scrutiny of the results in this study further revealed that samples dehydrated under direct sunlight retained less moisture than the shade-dehydrated ones, but their contents increased as blanching time increased. Thus, the sample blanched for 15 min retained the most moisture, followed by the ones blanched for 10 and then 5 min, in that order. This does not necessarily mean that the blanching process led to absorption of more water by the sample: observation has shown that as leafy vegetables are boiled, they tend to shrink and release liquid, resulting in an increase in the amount of liquid in the boiling vessel. However, there is a possibility that as the liquid was released from the leaf, the external cells of the leaf became compacted together to form a semi-permeable membrane that prevented some water from escaping. Exposure to direct solar radiation possibly assisted the faster formation of this membrane compared to shade-dehydration, where the pattern was different: the shade-dehydrated samples had their moisture retention decreasing with increasing blanching time. These different trends are clearly presented in Figure 1. No comparable study was found against which to check these findings.
Ascorbic acid retention
The results of the ascorbic acid retention in the processed pumpkin leaves are also presented in Table 1. There was a significant difference (p<0.05) between the fresh and processed samples, except for the 5 min SHD, with the fresh samples having the highest amount. The processed samples all differed significantly (p<0.05) from one another, in the order 5 min SHD > 10 min SHD > 15 min SHD > 5 min SD > 10 min SD > 15 min SD. This showed that the samples dehydrated under a shade generally had higher values than the sun-dehydrated ones. At the same time, ascorbic acid kept reducing as blanching time increased. The ascorbic acid in the 5 min shade-dehydrated sample did not differ significantly from that in the fresh one. However, within the scope of this study it was not certain whether 5 min of blanching was enough to achieve green colour retention, deactivation of microorganisms and enzymes, and improvement of flavour, which are the main reasons for blanching vegetables (Kakade and Neeha, 2014; Ahmed et al., 2001). A study conducted by Vyankatrao (2014) on mint, coriander, curry leaves and bitter gourd revealed that the highest retention of ascorbic acid alternated between sun-dehydrated and shade-dehydrated samples among different vegetables, indicating that the findings of this study are specific to leaves of C. moschata and cannot be easily generalised to all leaves dehydrated under the same conditions; the type of leaf is also an important factor. However, a number of studies on drumstick (Moringa) leaves (Joshi and Mehta, 2010) are in concord with the findings of this study.
Conclusion
The findings of this study have shown that in the preservation of C. moschata leaf vegetables by the dehydration method, the duration of blanching and the light intensity during dehydration have an effect on the retention of vitamin C. A reduced blanching duration accompanied by shade-dehydration can retain more of the vitamin. Proper shade-dehydration of the vegetable does not compromise its shelf-life, as the retained moisture may not differ from that of vegetables dehydrated under direct sunlight.
Figure 1. Trends in the retention of moisture by pumpkin (C. moschata) leaves blanched for varied times and dehydrated under direct sunlight and shade.
Table 1. Moisture and ascorbic acid retentions in differently processed dehydrated pumpkin (Cucurbita moschata) leaves. Sample name represents the method of preparation: the first part stands for blanching time in minutes and the upper-case letters represent the dehydration condition (SHD = shade-dehydrated, SD = sun-dehydrated). Different superscript letters indicate significant differences among mean values of triplicate tests, p<0.05. | 2018-12-11T18:10:55.588Z | 2017-08-31T00:00:00.000 | {
"year": 2017,
"sha1": "b9101b290f48b35a4798590eb086d88726fcadf9",
"oa_license": "CCBY",
"oa_url": "https://academicjournals.org/journal/AJFS/article-full-text-pdf/4A39B6165185.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "b9101b290f48b35a4798590eb086d88726fcadf9",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"History"
]
} |
255942031 | pes2o/s2orc | v3-fos-license | Desbordante: from benchmarking suite to high-performance science-intensive data profiler (preprint)
Pioneering data profiling systems such as Metanome and OpenClean brought public attention to science-intensive data profiling. This type of profiling aims to extract complex patterns (primitives) such as functional dependencies, data constraints, association rules, and others. However, these tools are research prototypes rather than production-ready systems. The following work presents Desbordante, a high-performance science-intensive data profiler with open source code. Unlike similar systems, it is built with an emphasis on industrial application in a multi-user environment. It is efficient, resilient to crashes, and scalable. Its efficiency is ensured by implementing discovery algorithms in C++, resilience is achieved through extensive use of containerization, and scalability is based on replication of containers. Desbordante aims to open industrial-grade primitive discovery to a broader public, focusing on domain experts who are not IT professionals. Aside from the discovery of various primitives, Desbordante offers primitive validation, which not only reports whether a given instance of a primitive holds or not, but also points out what prevents it from holding via the use of special screens. Next, Desbordante supports pipelines: ready-to-use functionality implemented using the discovered primitives, for example, typo detection. We provide built-in pipelines, and users can construct their own via the provided Python bindings. Unlike other profilers, Desbordante works not only with tabular data, but with graph and transactional data as well. In this paper, we present Desbordante, the vision behind it and its use-cases. To provide a more in-depth perspective, we discuss its current state, its architecture, and the design decisions it is built on. Additionally, we outline our future plans.
INTRODUCTION
According to [2], data profiling is the "set of activities and processes to determine the metadata about a given dataset". Such metadata can be useful for dataset exploration, various tasks related to data quality, database management and database reverse engineering. It also has many applications [3,20] in data integration and query optimization domains.
Data profiling can be divided into naive and science-intensive. Naive profiling concerns extraction of such simple facts as number of rows and columns in a table, minimum and maximum values in a column, detection of atomic column data types and so on. There are hundreds of data profiling tools that belong to this class, as almost all big information system vendors offer them. Many open-source tools exist as well, and one of the most prominent ones is Pandas Profiling [11].
On the other hand, science-intensive profilers focus on extraction of complex metadata, which requires sophisticated algorithms. Examples of such metadata discovery are the following: extraction of all kinds of functional dependencies from tables, both exact and relaxed [12], association rule mining [4], detection of semantic column types [19], discovery of data constraints [6,37] and many more. Tools that offer such profiling are much rarer. Two significant systems that offer such functionality are Metanome [26] and OpenClean [24].
In this paper, we present Desbordante (Spanish for boundless), a high-performance science-intensive data profiler. It is inspired by Metanome, but differs from it in several key points. First of all, the focus of Desbordante is on industrial applications in a multi-user environment. Extracting complex metadata requires sophisticated algorithms which are computationally expensive and crash-prone; Desbordante addresses these issues by taking a mindful approach to algorithm implementation and via a specifically designed application architecture. Second, we have a different vision of the use-cases for such a tool. We envision our primary users as domain experts who possess a large amount of data that they would like to explore, while not necessarily being IT professionals. Our users wish to discover various patterns in their data which state a nontrivial fact. These patterns are formally described using a variety of so-called primitives. By itself, a primitive is a description of a rule that holds over the data (or a part of it), stated formally by mathematical means. Functional dependencies are a well-known example.
The kinds of experts who could be interested in pattern discovery are: (1) Bioinformatics researchers, chemists, geologists, and in fact almost any scientist working with large amounts of data, especially those obtained experimentally. (2) People working with financial data: financial analysts, salespeople, traders, who all also have a lot of data at hand that can be explored. (3) Data scientists, data analysts, machine learning specialists.
For scientists that work with large amounts of data, finding a primitive indicates the presence of some pattern. Based on it, they may be able to formulate a hypothesis or even draw conclusions immediately (if there is enough data). At the very least, the found pattern can give them a direction for further study. For example, the bioinformatics group of the Saint-Petersburg JetBrains lab used such primitives in their work [35].
In the case of financial data, the researcher can also try to obtain some kind of hypothesis (for example, "out of all cars offered by a competing company, red ones are the best selling"). However, there are more mundane and more in-demand applications: cleaning errors in data, finding and removing inexact duplicates, and many more. Note that scientists might also be interested in this functionality, albeit to a much lesser extent.
As for machine learning, the found primitives can help in feature engineering and choosing the direction for the ablation study.
The aforementioned use-cases lead to the following specific requirements for our primitive discovery tool:
• Focus on approximate primitives. Our users work with real data, therefore implementing approximate primitives should be a priority.
• Focus not only on primitive discovery, but also on primitive validation and explainability of results. Our users need to be provided with information on why a particular instance of a primitive does not hold.
• Focus on tunability. Our users wish to fine-tune the discovery process by specifying various constraints on the sought-after primitives.
• Focus on non-tabular data. While tables are the most popular data type, our users are also interested in other types such as graphs and transactional data.
• Focus on supporting multiple interfaces. Some of our users need a console application, some a rich web UI, and some a Python interface.
The academic database community has created a large number of primitives describing many different patterns that may be present in data. For example, there are more than thirty different formulations for the class of relaxed functional dependencies alone [12], and every year novel primitives continue to appear. However, these primitives are largely unknown to people outside of the community. In the worst case they simply remain a theoretical result, and at best they exist in the form of a little-known prototype which is not ready for industrial application. There are several tools [24, 26] that contain collections of algorithms for discovering and validating primitives, but they are not production-ready either. They are performance-bound and fail to cater to the specific needs of our users, lacking required functionality.
Therefore, the idea of Desbordante is to "open" these primitives to the general public and give everyone the opportunity to study their data. Now, turning to Desbordante itself:
• Its industrial focus is expressed through efficiency and resilience. Unlike other science-intensive profilers, the kernel of Desbordante is implemented in C++.
• Desbordante comes with a console and a web version; a Python interface is also provided.
• It supports pipelines: ready-to-use functionality implemented using the discovered primitives, e.g., typo detection. Users can construct their own pipelines via the Python bindings.
• It supports tabular and non-tabular (graph, transactional) data.
Desbordante is an open-source project implemented using a modern tech stack, and a deployed demo is available online.
The main contribution of this paper is the description of the tool vision, architecture, and approaches we took to satisfy user needs and ensure its high performance, resilience, and scalability. We also discuss tool positioning, list currently supported primitives, and present future plans.
The paper is organized as follows. In Section 2 we present related work and discuss existing profiling tools. Next, in Section 3 we describe primitives and the requirements specific to their discovery. In Section 4 we examine Desbordante from a 10,000-foot view and below, describing the project's vision, core functionality, and considerations regarding user experience and system performance. In Section 5 we sketch the system architecture of Desbordante, its technology stack, and the hows and whys of the implemented microservices. In Section 6 we list our future plans and the milestones to achieve.
RELATED WORK
It is hard to classify existing tools in such a way that they would exactly match Desbordante in properties, functionality, or vision. Therefore, we review the most well-known tools which implement various data profiling techniques and can mine primitives, or at least rely heavily on them.
The closest relative to Desbordante in the data profiler family is Metanome [26]. In a nutshell, Metanome is a framework which provides developers with an infrastructure and a corresponding set of interfaces for implementing and benchmarking primitives. Metanome's architecture makes the process of developing and testing research ideas as fast and convenient as it can be, thus enabling developers to concentrate on the algorithms instead of boilerplate code for database connectors, text processing utilities, etc. Metanome was developed by the Hasso Plattner Institute group, and almost any algorithm developed by the same group can be plugged into Metanome as a JAR file (the concept of an algorithm as connectable compiled code is what Desbordante currently lacks). However, Metanome cannot be considered a true industrial alternative to Desbordante due to the reasons we present in an extensive evaluation and comparison of both platforms [34]: larger memory footprint and inferior dependency discovery speed, just to name a few. Still, the Metanome system is an inspiration for Desbordante as a user-friendly, high-performance and flexible data profiler.
The goal of the OpenClean [24] system is to become a part of the modern data science stack by occupying the niche of data cleaning and profiling. Being an open-source Python library, OpenClean provides its users with an environment where they can seamlessly integrate data profiling with other frameworks and libraries for data processing and machine learning. The concept of FD is used in two ways: checking data for FD violations and FD mining. The former is implemented in Python as a combination of mapper and group filters (much like the SELECT... GROUP BY... HAVING idiom for FD checking in SQL). For records which violate an FD, a repair process can be started via OpenClean repair strategies or user-defined ones. The primitive mining functionality is provided by a standalone package which launches a subprocess running Metanome JAR files. Basically, any algorithm that was once implemented for Metanome can be run within OpenClean. It also means that, in terms of algorithm performance, OpenClean inherits all the problems of Metanome.
Unlike OpenClean, most data cleaning tools have no built-in functionality for primitive mining and expect the user to provide primitives as input to the cleaning process [7, 13, 29, 30]. For example, the data repairing framework HoloClean [29] makes good use of denial constraints (DCs), which subsume the FD, CFD and matching FD concepts. The input of HoloClean is an inconsistent dataset, a set of DCs and any external knowledge which can be used to repair the dataset. HoloClean combines every piece of available information and proposes solutions that can bring the data into a consistent state. Since the main goal of HoloClean is to pave the road to a careful restoration of a consistent form of a dataset, the tool does not implement any internal mechanisms for mining primitives or for efficient in-place inference of metadata. The same vision is shared by Horizon [30], which computes an FD pattern graph based on the provided FD set and constructs a solution via static analysis of the pattern graph.
However, not every data profiling tool considers primitives to be a great deal for the cleaning process or the error detection task. Instead, such tools rely on machine learning, probabilistic methods [36], or a curated knowledge base, as Katara does [14]. The authors of Katara even refuse to consider FDs as trusted metadata, since this type of primitive cannot guarantee that data will be fixed in an unambiguous way.
Some data engineering tools follow a different philosophy: instead of fixing data, they make sure it is tidy in the first place. Great Expectations [1] allows its user to define complex integrity constraints which are used as an assertion mechanism while the data flows through ETL processes. Such tiny unit tests for data validation can be embedded into a workflow and immediately raise a flag if anything unexpected happens, e.g., newly arrived data violates a primitive that was described within the Great Expectations framework. A similar idea is used in the Auto-Validate [32] system.
The aforementioned tools are implementations of research findings which are carefully surveyed in dozens of papers, and some of these tools are open-source and free to use. To make the overview complete, we would like to list some industrial solutions.
Most commercial solutions support the concept of FD as a part of data profiling: SAP Information Steward, Oracle Warehouse Builder, Informatica Data Quality, Microsoft SQL Server Data Profiling Task, and Talend Open Studio can return functional dependencies which almost hold on the data, or verify whether a user-specified dependency holds. For each AFD, these tools also maintain the fraction of records which violate the dependency. It seems that this way of processing FDs/AFDs is almost a must-have feature for any data profiling tool, and it comes in handy when performing anomaly detection and exploring "broken" records. However, FDs/AFDs are usually the only types of primitives implemented in pay-to-use tools, since nowadays their focus has shifted to the machine learning side of the data profiling spectrum.
DISCOVERY OF SCIENCE-INTENSIVE PRIMITIVES
3.1 Current State and Motivation
By itself, a primitive is a description of a rule (a pattern) that holds over the data, described mathematically. Functional dependencies are a good example: a dependency A −→ B (A and B being columns) holds if, for each pair of rows, equality of the values in A implies equality of the values in B.
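As an illustration of the definition (not Desbordante's actual implementation), checking whether an FD holds on a table can be sketched in pandas: group the rows by the left-hand side and verify that each group agrees on the right-hand side.

```python
import pandas as pd

def fd_holds(df: pd.DataFrame, lhs: list, rhs: str) -> bool:
    """True iff every group of rows agreeing on `lhs` also agrees on `rhs`."""
    return bool((df.groupby(lhs, dropna=False)[rhs]
                   .nunique(dropna=False) <= 1).all())

df = pd.DataFrame({"zip": [10115, 10115, 20095],
                   "city": ["Berlin", "Berlin", "Hamburg"]})
print(fd_holds(df, ["zip"], "city"))  # True: zip -> city holds on this table
```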
There are several hundred types of primitives [12, 33], and each of them has well-established properties and a sound theory behind it. New types are developed all the time, too.
However, as stated in the Introduction, they largely stay within the database and associated communities and provide no benefit to the broader public.
The reasons for this are the following:
• Largely, implementations of primitives are poorly accessible or not available at all:
- The majority of them were developed in the pre-GitHub era, when it was not customary for authors to provide source code, or the source code was published on a research group's web site, which is usually long dead now. Either way, there is currently no source code available.
- Those that are accessible now are scattered around the Web, on personal web sites or in obscure repositories. Of course, the presenting paper usually includes a link, but prospective users have to know about the primitive and the paper first, which is not the case.
• If they are available, they are hard to set up and run. For example, the newcomers of our team took from 6 to 12 work hours to set up and run Metanome, and they are mostly computer science students who are familiar with IT specifics. Thus, for a non-IT specialist who would like to try some primitive, it will be a rather tedious task.
• Each available implementation of a primitive (or even of a discovery algorithm) would require its own software ecosystem to set up and maintain. This is another obstacle for a prospective non-technical user who would like to try some primitives.
• Finally, available implementations are usually proofs of concept or prototypes which were made for some paper and abandoned later. Therefore, they are usually not very efficient, since they were developed in languages which favour rapid prototyping, like Java or Python. These languages lack the efficiency and low-level tunability of C++.
There are also scalability issues, in the sense that real-world datasets are likely to be larger than those benchmarked by the paper authors. Moreover, these implementations may crash when processing a dataset which was not benchmarked by the authors. Thus, it is necessary to shift the limits of applicability further (hence the name, Desbordante).
There are platforms which try to address these issues, such as Metanome or OpenClean. However, they fail to address all these issues at once. Thus, there is a need for an industrial-grade platform which will open primitive discovery to the broader public, and Desbordante tries to achieve this goal.
Specifics of science-intensive primitive discovery
Discovery of science-intensive primitives has its own specifics, which can be described by several aspects, divided into two groups. The aspects belonging to the first group are inherent to all science-intensive profilers and stem from the nature of the primitive discovery task.
(1) Primitive discovery is a computationally hard problem. Discovery algorithms run into time or memory limits even for small datasets. Consider, for example, Table 1 from [28], which shows the sizes of datasets that can be mined for functional dependencies using server-class hardware: all datasets except two are smaller than one megabyte. The situation is similar for other primitives. Therefore, in order to make the discovery of primitives truly usable, we need to address these limitations. (2) Implementations of discovery algorithms are very complex, frequently depend on third-party libraries, and in general solve a task belonging to the forefront of science. Therefore, they are fragile: they can crash or freeze on some inputs. Thus, when "industrializing" them, one has to improve the reliability of the application by making it fault-tolerant.
The second group describes aspects which are specific to the vision and goals of our system. These reflect use-cases and needs of our users.
(1) Our users are more interested in approximate primitives.
Real-world data is likely to have all kinds of errors, missing values, and other types of artifacts. Therefore, exact versions of primitives are not applicable: they will rarely be found in real data. Instead, in developing our profiler, we must provide inexact versions which allow some degree of error.
(2) Our users need not only discovery of primitives, but also their validation. Unlike discovery, validation accepts a specific instance of a primitive (e.g. a specific functional dependency) as input and returns whether it holds or not. This leads to the need for special screens in which the user can analyze the data and see what prevents a given primitive from holding (e.g. conflicting values, rows, etc.). (3) Our users need to be provided with various tuning knobs that govern the discovery process. For example, concerning the discovery of functional dependencies, it is well known that dependencies with a larger left-hand side are less valuable. Their discovery usually does not indicate the presence of a real dependency, but instead points to the fact that the data segment used for mining is too small to contain a counterexample. The primitive discovery process is always costly, and it is worthwhile to skip unnecessary computations. Another important example is setting the error threshold for approximate primitives. At the same time, correct values depend on the particular dataset and the user's goals. (4) Our users have different preferences regarding the interface to use. Some of them prefer an old-school command-line interface, while others ask for a rich web UI. Furthermore, in order to open primitives to data scientists, it is essential to provide a Python interface.
Desbordante aims to take into account these specifics.
DESBORDANTE
4.1 Overview
The core of Desbordante is a C++ library containing all auxiliary data structures needed for primitive discovery algorithms, the algorithms themselves, and all required surrounding infrastructure. The library provides an API for executing the algorithms and obtaining their results, which is used by the back-end of the web application. There is also an additional library version with Python bindings, so that all Desbordante features can be used from Python code. Two core tasks are supported:
• Discovery: find all holding instances of a specified primitive over a specified dataset;
• Validation: given a primitive instance, determine whether it holds over a specified dataset, and provide additional information about what prevents it from holding otherwise.
The usage workflow is generally the following: (1) select the primitive and the desired algorithm for its discovery or validation; (2) specify a dataset to work on and the required parameters; (3) execute the algorithm and retrieve the results; (4) filter and sort the results as needed. These steps are clearly separated in the web application, while in the console version all the required information for the algorithm (steps 1-2) can be set directly via CLI parameters. Since the web application was designed to be used by non-IT professionals in the first place, it should provide quality-of-life features besides its main functionality. Examples of such features are a viewable snippet of the selected dataset, a user-friendly interface with extensive usage examples, ready-to-use pipelines, and a progress bar which shows the current execution status of a task. Desbordante itself and the web application have a set of built-in datasets. They mainly serve two purposes: first, to help users understand what data insights they can get using specific primitives; second, to familiarize newcomers with the workflow of the tool. In addition to the built-in datasets, the web application allows users to upload their own. These datasets constitute the user's personal library, which shares the same workflow as the built-in ones.
User-facing aspects
The first feature that we would like to present is the tunability of the primitive discovery and validation processes. Each primitive discovery or validation algorithm has a set of options, which can be divided into two groups: general and primitive-specific parameters. General parameters are the properties which need to be set up for any discovery task: the dataset, its delimiter, and a Boolean switch which indicates whether the dataset has a header. Then there are primitive-specific parameters, the first and most important of which is the algorithm. For some primitives, the best performing algorithm is more or less known, but for some it is not. Moreover, a discovery algorithm may perform badly on a "wide" or a "long" table. It may also crash due to the specifics of a particular dataset, since different algorithms are built upon different principles and, for example, may require too much memory. Therefore, we have decided to provide several available algorithms, and in some cases all of them.
Each of these algorithms has its own set of supported parameters. First, there are parameters specifying what to validate or look for, i.e., filters on the primitive instances. Then, if an algorithm supports discovery (or validation) of an approximate primitive, there is the degree of allowed violations. Next, for discovery algorithms it is useful to limit the "depth" of the search: the discovery process is time-consuming and, at the same time, not all instances are always needed. Finally, the user can specify the number of threads to be used for primitive discovery or validation if the selected algorithm supports multi-threading.
The next important feature is custom screens for the primitive validation task. This task reports whether a specific primitive holds or not. However, if it does not hold, users need explanations and answers to questions such as "how much is lacking?" and "what prevents it from holding?". It is essential to provide such information, since it constitutes important knowledge about the data being explored: it can indicate errors in the data and point out "problematic" records. Therefore, there is a need for a screen that provides this information.
An example of such a screen for the console version of metric functional dependency [22] validation is presented in Figure 2. It shows clusters of records that share the same left-hand side but differ in the right one. The "x" marks records which are too far from the rest in terms of their right-hand side: their distance to any of the points from the same cluster is larger than the specified one, and therefore they are good candidates for outliers. The user can sort clusters and records within clusters using various parameters such as distance, index, number of outliers and so on. The primitive discovery task also implements result screens with rich interaction tools that allow sorting by various parameters, filtering with regular expressions, and so on.
Pipelines
Aside from primitive discovery and primitive validation tasks, Desbordante offers pipelines. A pipeline is a set of ready-to-use functionality implemented using discovered primitives, which benefits a non-expert end user. While discovered primitive instances are useful by themselves, we believe that it is important to demonstrate what can be done using them.
There are two types of pipelines in Desbordante: built-in and custom. The first type is present in the web application and has a rich interface. As a demo, we have included a typo detection pipeline in the deployed version. Its idea is as follows (a sketch of the cluster-extraction step is given after this list):
(1) Find functional dependencies which almost hold, i.e., the approximate dependency [23] holds, but not the exact one. Present them to the user for inspection.
(2) Then, for the dependency selected by the user, present its clusters: row groups with the same left-hand side and different values in the right-hand side. These are the sets of rows which prevent the exact FD from holding. In this screen, the user can inspect the differences in the right-hand side and decide whether there is a typo or not. Having resolved the conflict for a particular cluster, the user can re-upload the new version of the dataset and continue data cleaning.
(3) In order to reduce the number of presented clusters, the user interface contains several parameters that enable cluster filtering. A threshold for a dependency to be considered as "almost" holding can also be set.
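A minimal pandas sketch of the cluster-extraction step (2) follows; it illustrates the idea, not the code of the built-in pipeline, which additionally handles AFD discovery, filtering and the UI.

```python
import pandas as pd

def violating_clusters(df: pd.DataFrame, lhs: list, rhs: str):
    """Row groups sharing `lhs` values but disagreeing on `rhs`."""
    return [group for _, group in df.groupby(lhs, dropna=False)
            if group[rhs].nunique(dropna=False) > 1]

df = pd.DataFrame({"country": ["USA", "USA", "Germany"],
                   "capital": ["Washington", "Washingtn", "Berlin"]})
for cluster in violating_clusters(df, ["country"], "capital"):
    print(cluster, end="\n\n")  # the "Washingtn" typo surfaces here
```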
Note that the CLI does not provide built-in pipelines, since they require extensive interactivity.
While built-in pipelines are useful as a demo, they require significant effort to implement. As we are limited in resources, we put only the most useful scenarios on the web version. At the same time, we would like to allow users to experiment and build their own pipelines. For this, we provide an ability to build custom pipelines.
Contemporary data scientists use Python, and therefore it is essential to enable calling primitive discovery and validation tasks from Python programs. For this, we employed the pybind11 [21] library to provide the necessary operators and data structures. Using these bindings, our users can call Desbordante algorithms to experiment and construct their own pipelines. We plan to add popular ones to the web version and develop a user-friendly interface for them.
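A hypothetical sketch of what such a custom pipeline could look like is shown below. The module, class and method names (desbordante, Pyro, load_data, execute, get_fds) are illustrative assumptions, not a documented API.

```python
import desbordante  # hypothetical module name exposed by the bindings

algo = desbordante.Pyro()                      # AFD discovery algorithm
algo.load_data(table=("data.csv", ",", True))  # path, separator, has_header
algo.execute(error=0.05)                       # allow up to 5% violations
for fd in algo.get_fds():                      # iterate discovered AFDs
    print(fd)
```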
Performance
Unlike all existing open-source solutions, the discovery part of Desbordante is fully implemented in modern C++. While popular languages such as Python and Java are relatively simpler and thus offer a fast development process, they possess a number of no less prominent drawbacks: (1) Given equal effort put into the code, the resulting performance of Java/Python applications is, on average, worse than that of C++. (2) Java application performance can be unpredictable. Since explicit memory management is not possible in Java, programs rely on an automatic garbage collector, which may be invoked at any time. Therefore, run times may differ significantly even for consecutive invocations of single-threaded programs. (3) Java programs usually leave a higher memory footprint than C++ ones. (4) Finally, these languages restrict opportunities for low-level optimizations, such as vectorization via SIMD instructions. This is a critical drawback for a high-performance computing task.
To demonstrate the validity of our arguments, we have experimentally compared Desbordante with Metanome [34]. For this, we selected the Pyro algorithm [23], since it targets one of the most promising primitives for the intended application scenarios: approximate functional dependencies.
The results are presented in Figure 3. The obtained improvement ranged from 1.19 to 3.43 times, with an average of 2.12. While these numbers are not dramatic, this is still an important result for such a computationally expensive problem. Another significant benefit is the reduction in memory consumption: the memory footprint of Desbordante is approximately two times lower than Metanome's. This is crucial, since many primitive discovery algorithms are memory-bound [28]; reducing the memory footprint thus enables the processing of larger datasets.
It is important to note that we have not exhausted the tuning potential of the C++ implementation. No sophisticated techniques (e.g., vectorization) were used, no source code profiling was done, and only standard data structures and libraries were employed. Currently, Desbordante uses default C++ and Boost data structures, and we have not tuned their parameters. Desbordante does not rely on custom memory management libraries (allocators), but instead uses the C++ default. It is a well-known fact [25] that using a specialized allocator is a simple yet efficient way to improve the performance of C++ programs. Therefore, it is possible to improve performance even further.
Finally, we must also discuss approaches that rely on distributed execution of Java/Python code. Firstly, we believe that they will not improve the situation much. Primitive discovery problems generally scale poorly, and naive distributed approaches do not work at all (e.g., see Fig. 1 in [31]). The reason is that it is necessary to pass over the whole dataset (or a significant part of it) in order to arrive at the answer. Therefore, we believe that it is important to extract the maximum performance out of single-node processing.
Secondly, since distributed approaches usually consist of a data shuffling scheme and some local algorithm, a distribution-based approach does not compete with ours but complements it.
Supported primitives
For the reasons stated in Section 3, Desbordante possesses a slightly different set of primitives than Metanome. Currently, Desbordante supports discovery and validation of the following primitives: (1) Discovery of exact functional dependencies. We support all algorithms [27] that were implemented by the Metanome team, including HyFD [28] and the approximate algorithm AID-FD [8].
(2) Discovery of approximate functional dependencies, using the Pyro [23] and TANE [18] algorithms. (3) Discovery of conditional functional dependencies using the CTANE [15,16] algorithm and its variations. (4) Discovery of unary and n-ary inclusion dependencies using the Spider algorithm [5]. (5) Validation of metric dependencies [22] (only in the console version for now). (6) Discovery of fuzzy algebraic constraints [10] (only in the console version for now). (7) Discovery of association rules. This code was adapted from Christian Borgelt's implementations [9], since they are efficient (used in the R package) and time-proven. Following his recommendations, we have selected only the ECLAT, FP-Growth and Apriori algorithms. (8) Validation of graph dependencies [17] (only in the console version for now). (9) Naive profiling. In order to expand the user base, we have also implemented naive profiling, which covers a number of simple statistics such as the minimum, maximum, and the number of missing values per column. For now, we lack a significant number of primitives that Metanome has, such as UCCs, order dependencies and others. However, we have types that are absent there (e.g., metric FDs) and that are more relevant for our use cases. Also, to the best of our knowledge, for some primitive types, e.g., graph dependencies, metric functional dependencies, and algebraic constraints, our implementation is the only publicly available one. Finally, we plan to greatly expand their number and catch up in the future; some of the missing ones are already in the works.
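To make the distinction between discovery and validation concrete, the snippet below checks whether a given exact FD X -> Y holds on a table; this is a naive reference check in plain Python, not Desbordante's optimized C++ code, and the input file name is hypothetical.

```python
import csv
from collections import defaultdict

def fd_holds(rows, x, y):
    """Exact FD X -> Y holds iff no two rows agree on X but differ on Y."""
    seen = defaultdict(set)
    for row in rows:
        seen[tuple(row[col] for col in x)].add(row[y])
    return all(len(values) == 1 for values in seen.values())

with open("orders.csv", newline="") as f:   # hypothetical input file
    rows = list(csv.DictReader(f))
print(fd_holds(rows, ("zip",), "city"))
```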
ARCHITECTURE
In this section, we describe the architecture of the web application that we built around its core, Desbordante. Initially, Desbordante was a simple console application that used command-line parameters and standard output as its user interface. In the summer of 2021, we arrived at the vision described in the Introduction and decided to provide it with a web interface.
Thinking about its implementation, we formulated the following requirements for the web application: (1) Functionality. The system should be able to perform several user tasks in parallel. (2) Recoverability. The system should be able to recover itself in case of various unexpected errors. (3) Efficiency and manageability. The system should be able to limit the computational resources given to a particular user or even a task. This requirement is crucial to prevent resource overuse and will also allow resource scheduling. (4) Scalability. The architecture of the system should allow using several computing nodes for performing user tasks. Therefore, we decided to use a microservice architecture, where each individual service performs a specific set of tasks. We have separate services for serving user requests, managing containers, executing tasks, the database, and the task queue. The microservice architecture also goes well with containerization. The overall architecture, showing the connections between microservices, is presented in Figure 4.
This approach also simplified dependency management as each of the services has every dependency pre-installed in its container. Thus, new versions of the application can be quickly redeployed to deliver new features to the users as soon as possible.
As a result, we have built a fault-tolerant application: if one of the microservices fails, the application continues its work and quickly restarts the failed microservice. Our architecture is also highly scalable; it can launch more task executors if necessary.
Let us consider the architecture in detail.
(1) The frontend server serves web pages and directly interacts with the users. It is responsible for server-side rendering, a technique that moves computation from the client browsers to the server. This speeds up page loading and makes the user interface more responsive. (2) The Node.js web server provides an API that allows clients to run tasks, send files and receive status updates. For each arriving task, it creates entries both in the database and in the queue. The web client periodically pings the server, which replies with progress info on the task. After task execution is finished, the server sends back either the results or an error message (if a calculation error occurred or the task required too many computational resources to complete). (3) In addition to Docker, we have our own container orchestrator that is used for managing executor containers. The orchestrator also acts as a consumer for the queue. For each task, it creates a new executor container and provides it with the task's payload. This service allows us to limit the computational resources used by the executors. Additionally, in case of an executor failure it puts an error message into the database. (4) We use Kafka as a task queue.
That makes load balancing between multiple instances of the orchestrator service possible. (5) PostgreSQL serves as the DBMS in our project. It is used as storage for information about currently running or recently finished tasks, such as progress info, error messages and calculation results. Additionally, it contains user profiles, session info and file metadata. (6) The kernel of the app is its task executor, which utilizes Desbordante as a library. Its purpose is to run the specified algorithm on the provided data. On launch, it acquires the data from the database and then starts the execution. While calculations are in progress, the executor updates its task's status in the database. On success or failure, it returns the calculation results or an error, respectively. Several executor containers can run at the same time, making it possible to serve multiple users simultaneously.
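A rough sketch of this executor lifecycle is given below. All helper names (fetch_task, update_status, store_result) are hypothetical stand-ins for the actual database and queue plumbing, and error handling is reduced to a single branch.

```python
import desbordante   # the core library the executor links against (assumed module name)

def run_executor(task_id, db):
    task = db.fetch_task(task_id)          # payload: dataset path, algorithm, params
    db.update_status(task_id, "RUNNING")
    try:
        algo = desbordante.create(task.algorithm, **task.params)  # hypothetical factory
        algo.load_data(task.dataset_path)
        result = algo.execute()
        db.store_result(task_id, result)   # the web server polls this table for progress
        db.update_status(task_id, "COMPLETED")
    except Exception as exc:               # bad input, resource limit hit, ...
        db.update_status(task_id, "ERROR", message=str(exc))
```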
The main set of microservices is accompanied by a monitoring system. It utilizes two dedicated tools: Prometheus and Grafana. Prometheus accumulates metrics from the services, periodically collecting data from the specified endpoints. Grafana presents these data in the form of informative dashboards. It is also capable of sending notifications about events that meet predefined conditions.
The collected data provides insights into the system's health, resource consumption, and execution errors, which is crucial for addressing potential hardware and software issues.
FUTURE PLANS
Desbordante is currently being actively developed. There are two primary directions: improving the user experience (in a broad sense) and adding new primitives. The first one includes the following tasks:
• alternative approaches to data uploading: via external file link, database connectors or import of serialized data structures (e.g. pandas pickled dataframes or NumPy arrays);
• export of results in the most common data exchange formats;
• web API tokens, so that a remote high-performance server can be used for discovery tasks;
• data manipulation such as in-place table edits, column renaming or creation of new columns based on a user-defined formula;
• regexp search over dataset cells;
• extending the user and admin dashboards.
The second direction concerns extending the pool of available primitives. There are two subdirections: first, we plan to catch up with Metanome by adding the missing primitives; at the same time, we will continue to implement our vision and bring lesser-known primitives to light. Near-term plans include implementing the following primitives:
• matching dependencies;
• order dependencies;
• denial constraints;
• differential dependencies;
• unique column combinations;
• various types of relaxed functional dependencies;
• graph dependency discovery;
• advanced dataset statistics.
Finally, we also plan to touch upon system aspects:
• devise smart result caching and result-containment checking;
• implement stream processing and dynamic recalculation of primitives.
ACKNOWLEDGMENTS
We would like to thank Nikita Talalay and Bulat Biktagirov for their contribution to the project. We would also like to thank Anna Smirnova for her help with the preparation of this paper.
CONCLUSION
In this paper, we have presented Desbordante, an open-source data profiler focused on the discovery of science-intensive patterns in data. Desbordante aims to open industrial-grade primitive discovery to a broader public, focusing on domain experts who are not IT professionals. Unlike similar systems, it is built with an emphasis on industrial application in a multi-user environment. It is efficient, resilient to crashes, and scalable: its efficiency is ensured by implementing the discovery algorithms in C++, its resilience is achieved through extensive use of containerization, and its scalability rests on the replication of containers. | 2023-01-18T06:42:33.115Z | 2023-01-14T00:00:00.000 | {
"year": 2023,
"sha1": "adf814579b2dbcdb760d37b20bf83d2f0a00e226",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "adf814579b2dbcdb760d37b20bf83d2f0a00e226",
"s2fieldsofstudy": [
"Computer Science",
"Physics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
235220597 | pes2o/s2orc | v3-fos-license | Moisture channels and pre-existing weather systems for East Asian rain belts
Rain belts in East Asia frequently pose threats to human societies and natural systems. Advances in skillful forecasting of heavy precipitation require a deeper understanding of the preconditioned environments and the hydrologic cycle. Here, we disentangle 15 dominant moisture channels along four corridors reaching the Somali Jet, South Asia, the Bay of Bengal, and the Pacific basin for the warm-season rain belts. Among them, the Somali and South Asian channels have been underappreciated in the literature. The results also highlight the importance of terrestrial moisture sources and the close relationship between the moisture pathways and the rain belts' characteristics. Back-tracing the weather within a 2-week lead time reveals the pre-existing weather systems and circumglobal wave trains that govern the moisture channels. Findings from this work develop a better understanding of the water cycle of East Asian rain belts, and may offer insights into model evaluation and heavy rainfall prediction at longer lead times.
INTRODUCTION
Being the most influential and iconic weather phenomenon of the East Asian summer monsoon (EASM) season, the east-west elongated rain belts often trigger landslides and floods that greatly disrupt agriculture, natural systems, and human property. These rain belts exhibit a northward migration on the intraseasonal time scale, which orchestrates distinct monsoon stages in various parts of East Asia, such as the Pre-Meiyu, Meiyu/Baiu and mid-summer stages 1-4. Intensive efforts have been devoted to studying the immediate causes of rain belt formation and distribution at the local scale 5,6, such as supergeostrophic lower-level jets 7,8, strong horizontal shear lines 9, sharp gradients of equivalent potential temperature 10, and the upper-level westerlies 11,12. There has also been growing attention to the processes that cause extreme continental precipitation in East Asia on a 2-week time scale, such as non-local moisture sources, moisture transport, and teleconnected weather circulations 13-17, as well as their counterparts in other parts of the world 18-21. Intensive endeavors have also been made to establish source-receptor networks to better understand the dominant sources of monsoon rainfall over the continents 13,14,22,23.
In terms of moisture transport, four main pathways have been widely recognized to supply moisture to summer precipitation in East Asia, including those from the Bay of Bengal, South China Sea, western North Pacific, and Eurasia 14,[24][25][26][27][28] . Early studies on major moisture pathways were conducted mainly through numerical experiments on moisture influx to the target region 25 or moisture budget diagnosis from the Eulerian perspective [26][27][28] . A recent study 24 revealed the main moisture channels that were similar to those documented in early studies using the Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model. Yet, the results largely depended on the choice of prescribed source areas where trajectories were averaged. As multiple lines of evidence suggest that the upwind terrestrial sources and the remote Indian Ocean were important for summer precipitation in East Asia 13,15,23,29 , there may exist a discrepancy between the current understandings of the dominant moisture sources and the pathways. For this reason, we recognize the need for revisiting and updating the knowledge about moisture transport through objectively mining the important moisture supply channels that were overlooked in the past. Further, given that forecasting heavy precipitation with lead times beyond 5 days remains challenging for numerical weather models 30,31 , a deeper understanding of the preconditioned synoptic-scale environments 2 weeks before heavy rainfall would be beneficial for numerical forecasting at a longer lead time [32][33][34] . Hence, it is important to holistically depict the moisture pathways and the preconditioned weather systems that govern the hydrologic cycle for East Asian rain belts, which remains a research gap to be filled.
In this study, we perform backward tracking experiments on the moisture from the detected rain belts in East Asia during the warm season (April-September) using a semi-Lagrangian dynamical recycling model (DRM) 35,36 . We classify up to 15 moisture channels for all the events using a data-driven trajectory clustering method (see "Methods" section), and reveal the connections between the characteristics of rain belt events and the moisture channels. Based on the dominant moisture channels, we diagnose the key weather regimes that govern the moisture channels up to 2 weeks ahead, which would be useful sources of predictability for East Asian rain belts. This study will help fill the missing pieces of the atmospheric water cycle for the East Asian rain belts, and offer insights on the key weather patterns and teleconnections that may increase the lead time for heavy rainfall forecasts.
Four main moisture corridors
We begin by presenting 15 clusters of moisture channels, with the majority stacked along four main corridors for East Asian rain belts (Fig. 1a). These corridors include the Somali corridor (S1 and 2), the South Asian corridor (SA1-4), the Bay of Bengal corridor (B1-3) and the Pacific corridor (P1-3). The "least popular" corridor (i.e., the Somali corridor) accounts for ~11% of the moisture trajectories tracked from the EASM rain belt events, while the "most popular" one (i.e., the South Asian corridor) covers ~29% of the trajectories in the warm season. The four corridors altogether cover ~84% of the moisture trajectories that supply the warm-season rain belts in East Asia. While the Somali corridor was believed to affect East Asian summer rainfall in some of the previous studies 27,37, the South Asian corridor was rarely documented but is crucial for supporting East Asian rain belts in the warm season, as suggested here.

[Fig. 1 caption (partial): ...Other types of channels are labeled with an O. Numbers in the parentheses beside the color bar show the percentages of trajectories (~1 million) assigned to individual clusters. b Pie charts of different sources' contributions to rain belt events assigned to each moisture channel; sources with contributions of at least 6% are labeled with acronyms (defined in the caption of Supplementary Fig. 1), while the rest are grouped into "Others"; the total attributed fractions of precipitation are shown in the subtitles. c Box plots of duration (unit: day), d total rainfall depth (unit: mm) and e rainfall intensity (unit: mm day−1) of rain belt events in each cluster; the bars represent the maximum, 75th percentile, median, 25th percentile, and minimum, sorted in descending order of the mean (yellow dot). f Heatmap of the arrival timings of rain belt events in each cluster in terms of probability; "401" on the x-axis denotes the dekad (i.e., 10 days) from April 1 to 10, with a similar convention for other values.]
Apart from the main corridors, there are secondary channels from the mid-latitudes, inland East Asia, and the South China Sea. Surprising as it may seem, the South China Sea channel (O3) accounts for only ~6% of the moisture trajectories, suggesting that this traditionally recognized moisture channel turns out to be minor for East Asian rain belts. This finding is consistent with our previous study showing that the South China Sea plays a secondary role in the warm-season precipitation of South China and the mid-lower Yangtze river basin 13. It is worth mentioning that there is no cross-equatorial moisture channel from Australia as proposed by early studies 3,37, implying that the direct moisture transport from the southern hemisphere might be trivial. This possibly results from the weak meridional flow across the Maritime Continent 27, the deflection of airflows by the Coriolis force 38, and the ascendancy of the southwest Indian monsoons and the western North Pacific subtropical high (WNPSH) 13,39.
Among the four dominant corridors, the Somali corridor hosts the longest moisture channels (i.e., S1 and 2), reaching the remote western Indian Ocean, where the Somali jet prevails in the warm season. As such, the Somali channels sequentially take up water vapor from the western Indian Ocean, the Arabian Sea, the Bay of Bengal, Indochina and even Southwest China (Fig. 1b(1)-(2)). Owing to the shorter pathway of the Bay of Bengal channels (B1-3), their strong moisture uptake occurs mainly over the Bay of Bengal and Indochina (Fig. 1b(7)-(9)). The difference in the lengths of the Somali and Bay of Bengal channels, as we will show later, reveals distinctive weather regimes at play (see "SAL-WNPSH coupling" and "Westward extension of the WNPSH" sections). In contrast, the Pacific corridor carries abundant moisture from the Philippine Sea, the western North Pacific, and the northern South China Sea (Fig. 1b(10)-(12)). The South Asian corridor, instead, primarily collects terrestrial moisture from the interior of the continent, including the Indian subcontinent, Southwest China, and Indochina (Fig. 1b(3)-(6)). We find it intriguing that 7 out of the 15 channels rely more on terrestrial sources than on the oceans (Supplementary Fig. 2). This finding reveals the conspicuous role of upwind terrestrial sources in the downwind rain belts during the EASM season, as also supported by recent research 13,29,40,41. It should be noted that around one-third of the rain belt precipitation is left unattributed in the DRM (Fig. 1b), possibly due to the fast recycling process and the inherent well-mixed assumption in the model 35,42. That said, the leading sources mentioned above would likely stay the same if the moisture attribution ratio were higher.
Moisture channels and rain belt characteristics
We see a remarkable disparity in the strength and persistence of rain belt events fed by different types of moisture channels. Statistically, the long-range oceanic channels, including the S1, S2, B1, and P2 channels, are associated with relatively persistent rain belt events with a mean lifetime greater than 2 days (Fig. 1c). Those persistent events are also the ones that produce high total rainfall depths (i.e., the precipitation accumulated by a rain belt event throughout its lifetime) (Fig. 1d), posing a great hydrologic threat to East Asia. As will be shown later, the long-range channels from the Indian basin are driven by prominent and well-organized weather systems, which continuously steer strong monsoonal airflows, or even atmospheric rivers 16,17,21, to sustain the East Asian rain belts. In addition, the Pacific channels (e.g., P2 and P3) contribute water vapor to rain belt events with relatively high rainfall intensity (Fig. 1e), owing to tropical cyclone activity in the western Pacific basin 13 (see "Tropical cyclones and eastward retreat of the WNPSH" section).
Conversely, those scattered and short-range channels from the mid-latitudes (O1), inland East Asia (O2), and the South China Sea (O3) are mostly associated with shorter and somewhat trivial rain belt events (Fig. 1c, d). It may be surprising to know that the South Asian channels fed by terrestrial sources are related to rain belt events with moderate duration, total rainfall depth, and intensity ( Fig. 1c-e). This finding, again, underscores the role of terrestrial moisture channels in sustaining East Asian rain belts.
We also note that the arrival time of the rain belt events fed by the same moisture corridor tends to synchronize. Specifically, the South Asian channels more actively partake in rain belt events from April to mid-May (Fig. 1f), when the Spring stage in East Asia prevails 2,43 . Following them, events fueled by the Bay of Bengal channels (B1-3) tend to appear in the Pre-Meiyu stage from early-May to early-June 1 . Progressing into the Meiyu season (mid-June to mid-July) 44 , we see stronger and more persistent rain belt events dominated by the Somali channels (S1 and 2). This is also the time when the Somali jets, the Indian summer monsoon and the EASM all peak in their strengths 45,46 . Events of the Pacific channels (P1-3), in contrast, are inclined to occur during the typhoon season (i.e., late summer). The unexpectedly clean cut in the arrival time of events of different moisture corridors reflects the prominent seasonality of both the rain belt's water cycle and the Asian weather system.
Governing weather systems and teleconnections
Given the pronounced seasonality of the moisture corridors, it is meaningful to explore the weather systems and teleconnections that set up the corridors in the first place. In the following sections, we unfold several pre-existing weather systems that lay the moisture channels 2 weeks ahead of the rain belt events based on weather composites. We organize the results around the key weather regimes that govern different moisture channels in similar periods of the season for ease of comparison and generalization of important weather systems. Different lead-time settings are chosen in composite maps to present the weather systems' evolution better. The anomaly fields shown in the composites are the mean deviations of daily fields from the 5-day-moving-mean daily climatology (1981-2018).
SAL-WNPSH coupling
One of the most interesting weather patterns involves two canonical circulations of the Asian summer monsoon season: the South Asian low (SAL) and the western North Pacific subtropical high (WNPSH). The former draws abundant monsoonal rainfall to South Asia, while the latter assists frontogenesis in East Asia 39,47. The synergistic coupling of these two weather systems favors the long-range Somali channels and a Bay of Bengal channel. Specifically, alongside the S1 channel, we observe a strong SAL, accompanied by enhanced Somali jets and southwest Indian monsoons, expanding toward East Asia from day −11 to −3 (Fig. 2a-c). Subsequent to the SAL's demise, the WNPSH strengthens and extends westward at 10-20°N (Fig. 2c, d), favoring southwesterly moisture fluxes and frontal convergence over the entire EASM domain. Such a SAL-WNPSH coupling opens a moisture highway by connecting the Somali jets and the Indian and East Asian southwesterly monsoons. We find similar coupled circulations steering the S2 channel (Fig. 2f-i), the difference being a northward-positioned SAL that places S2 to the north of the S1 channel (Fig. 2b, g) and thereby shifts the rain belts farther north (Fig. 2e, j). Noticeably, the B1 channel is steered by a weaker SAL-WNPSH coupling, with the SAL centered over the eastern Bay of Bengal, resulting in a shorter moisture track (Fig. 2k-n).
The intriguing SAL-WNPSH coupling raises the question of how it forms. It turns out that the 200-hPa divergent wind anomalies to the east of the SAL converge over the western North Pacific in all three channels' composites (Supplementary Fig. 3a-c, e-g, and j-l). Such an upper-level convergence of airflows would promote lower-level divergence, partly explaining the subsequent development of the WNPSH.
Notably, the three channels (i.e., S1, S2, and B1) tend to occur in a similar period from late-May to mid-July (Fig. 1f), implying that the observed SAL-WNPSH coupling shares a similar background climatology. As these channels are all related to strong and persistent rain belt events (Figs. 1c, d), the SAL-WNPSH coupling is arguably the key weather regime to initiate the long-range moisture channels, and the strong Pre-Meiyu and Meiyu rain belts in South China and the mid-lower Yangtze river basin (Fig. 2e, j, o).
Dual-anticyclone pattern
Another key coupling is a dual-anticyclone pattern consisting of an anomalous anticyclone in South Asia and the WNPSH, which commonly controls the terrestrial South Asian channels (SA1, 3, and 4) (Fig. 3). Regarding the SA1 channel, for example, we see prominent easterly IVT anomalies over the northern Indian Ocean on day −7, which later accompany a high-pressure anomaly straddling the Indian subcontinent and the adjacent seas (Fig. 3a, b). Such an anomalous anticyclone hinders the moisture transport from the Indian Ocean. Subsequent to the South Asian anticyclone, the WNPSH strengthens and steers moisture from the southwest (Fig. 3c, d). The interplay of the two anticyclones effectively blocks the moisture from the Indian Ocean while maintaining the moisture advection from South Asian land. This observation also explains why the South Asian corridor strongly depends on terrestrial sources (Fig. 1b). We find slightly different strengths and positions of the dual anticyclones in the other South Asian channels, which explain the nuances in their pathways (Fig. 3f-h, k-n). A similar dual-anticyclone pattern was also identified in our previous work, in which it directly correlated with contributions from South Asian land sources 13.

[Fig. 2 caption: Composites of meteorological fields within 2 weeks ahead of the clustered events fed by the S1, S2, and B1 channels. a-d The S1 channel's composites of 850-hPa geopotential height anomalies (shading, unit: m), 200-hPa geopotential height anomalies (contours, unit: m) and vertically integrated vapor transport (IVT) anomalies (vectors, unit: kg m−1 s−1) on days −11, −7, −3 and 0 relative to the occurrence of rain belt events (95 events), with day 0 being the first day of the event. e A risk map of the S1 rain belts' occurrence probabilities on day 0. f-j Composites and the risk map for the S2 channel (93 events); k-o the same for the B1 channel (78 events). The solid red line represents the regressed moisture channel, along which the black star denotes the regressed position of the moist air column at the corresponding lead time. Red (blue) contours denote positive (negative) values. Black dots over the shading, thick solid contours and the vectors all indicate statistical significance at the 0.05 level (Student's t-test).]
The arrival time of the events gives clues about the timing of the weather pattern. Specifically, the SA4 channel tends to appear from late June to July, while the other SA channels mainly occur in April and May (Fig. 1f). As such, the dual-anticyclone pattern in early summer (when the western ridge of the WNPSH mainly resides in the South China Sea 47) steers the SA1 and SA3 channels to induce rain belts to the south of the Yangtze river basin (Fig. 3e, j). In contrast, the dual-anticyclone pattern in mid-summer involves a northward-extended WNPSH 47, leading to the SA4 channel that fuels the Baiu-Changma rain bands over the Korean Peninsula and South Japan (Fig. 3o). This finding for the South Asian corridor again highlights how the seasonal evolution of the WNPSH shapes the moisture channels and the resulting rain belts.

Tropical cyclones and eastward retreat of the WNPSH

The Pacific channels' weather regimes bear some similarities, in which an anomalous cyclone emerges over the western Pacific a week before the rain belt events (Fig. 4). As the cyclone propagates northwestward, the accompanying moisture fluxes to its northeast convey abundant moisture from the Pacific Ocean to East Asia (Fig. 4a-d, g-i, and k-m), contributing to intensive rain belts along the coast and over the East China Sea (Fig. 4e, j, o). Further, the best track data (see "Data" section) reveals that substantial percentages of rain belt events in P1 (59%), P2 (78%), and P3 (60%) co-occurred with tropical depressions or stronger tropical cyclones (i.e., maximum sustained wind speed exceeding 41 km h−1). This finding confirms the role of tropical cyclones in establishing the Pacific corridor in late summer (Fig. 1f), which explains the relatively high rainfall intensity associated with the P2 and P3 channels mentioned earlier (Fig. 1e). As for the differences, P1 is accompanied by a cyclonic anomaly at a larger scale than that observed in P2 and P3, occupying the entire western North Pacific basin and propagating slowly to the west (Fig. 4a-d). Such a synoptic pattern may reflect the eastward retreat of the WNPSH in inducing the Pacific channel 39.
Westward extension of the WNPSH

On the contrary, the westward extension of the WNPSH is key to the relatively short-range moisture channels from the Bay of Bengal (B2), inland East Asia (O2), and the South China Sea (O3) (Fig. 5). Unlike the SAL-WNPSH coupling and the dual-anticyclone pattern, we observe a standalone anticyclone drawing moisture from the southwest while propagating westward (e.g., Fig. 5b, h, l). The anticyclone emerges about 5 days before the events, and we do not find coherent weather regimes beyond a 5-day lead time. This may suggest a rather short forecasting window for rain belts associated with these channels.
We again find the peak arrival timing helpful when interpreting the differences in the scale and location of the WNPSH. As the O3 channel mainly appears from April to May (Fig. 1f), it coincides with the time when the WNPSH mainly resides in the South China Sea 47, thereby confining the rain belts to the south (Fig. 5l-o). Progressing into June, the WNPSH strengthens and hovers over South China and the entire Philippine Sea. At this stage, the WNPSH is capable of steering moisture from as far away as the Bay of Bengal, thereby opening the B2 channel (Fig. 5b-d).
In late summer, when the WNPSH weakens and migrates slightly to the north, a much shorter O2 channel forms to supply the rain belts over the Korean Peninsula and South Japan (Fig. 5h-j). These observations explain how the westward extension of the WNPSH in different monsoon stages modulates the rain belts' atmospheric water cycle and thereby affects summer rainfall variability in East Asia, as reported in many other studies 7,39,47-49.
Circumglobal wave trains
As extratropical Rossby wave trains are known to influence rainfall in the subtropical and mid-latitude regions 4,34,50, it might not be surprising that wave trains could also modulate some moisture channels for rain belts. Here, we identify two circumglobal wave trains (CGTs) 2 weeks ahead based on the wave activity analysis (see "Methods" section). The observed CGTs contribute to the establishment of several moisture channels. One of the CGTs is notable from the initiation of the SA1 channel onwards; it concatenates the upper-level circulations in the subtropics around the globe (Fig. 6a). One important feature of this CGT is that it covers both the Sahara Desert and the Middle East, as the wave route penetrates as far south as about 15°N (Fig. 6c). For this reason, it differs markedly from other well-documented wave trains such as the Silk Road pattern 51 or the Europe-China pattern 52. By averaging the 200-hPa perturbation streamfunction over the latitudinal band of 15-50°N, it is clear that this CGT is guided by the upper-level westerly jet and propagates eastward from day −14 onwards (Fig. 6f). Meanwhile, the eastward-propagating CGT carries a deep trough in South Asia (60-90°E), which acts to steer the SA1 channel to the east until day −7 (Fig. 6a-c, f). Subsequently, an upper-level anticyclone reaches South Asia and contributes to the formation of the lower-level anticyclone (Fig. 6c-e), forming the dual-anticyclone pattern discussed earlier (Fig. 3b, c). These observations suggest the predominance of the upper-level wave train in regulating the lower-level circulations and thereby the SA1 channel.
Given such a well-organized CGT with a teleconnection to the SA1 channel, it is of interest to understand its origin. In the diagnosis of the Rossby wave source 53 (RWS; see "Methods" section), strong wave sources are detected at nearly all longitudes, covering the central North Pacific, the Rocky Mountains, the North Atlantic, the Sahara Desert, South Asia, and North China (e.g., Fig. 6g). In particular, we observe prominent wave sources over the Sahara Desert and the Indian subcontinent from day −14 onwards (Fig. 6g-k), which explains the southward shift of the wave train toward the subtropical deserts (e.g., Fig. 6c). Hence, we hereafter term this subtropical CGT the Pacific-Atlantic-Saharan-Indian (PASI) pattern.
It is noteworthy that the wave sources mainly come from the vortex stretching term (Eq. (4)), whereas the vorticity advection term is almost negligible (results not shown). Thus, it is likely that the Saharan wave source results from descending motions over the arid regions due to radiative cooling 51, while the wave sources over the oceans are often associated with subtropical highs (results not shown). We notice that the PASI pattern appears only in the SA1 and SA2 channels (the latter to be shown next), both of which mainly occur from April to mid-May (Fig. 1f). Taken together, the PASI pattern characterizes a springtime wave train induced by stronger radiative cooling over both the Sahara and the Indian subcontinent from March through May 54, together with more intense upper-level westerlies than in the summer months. In addition, there might be a second CGT in the mid-latitudes after day −4 (Fig. 6d, e), which highly resembles a stationary CGT to be discussed later.
Likewise, we observe the PASI wave pattern alongside the SA2 channel 2 weeks ahead (Fig. 7a), featuring the canonical wave source in the Sahara Desert (Fig. 7g, h). While the PASI pattern is likely to initiate a deep trough in the Middle East that steers the lower-level moisture pathway, it appears short-lived and does not propagate eastward as it does in the previous case. Instead, another CGT in the mid-latitudes dominates the weather regime from day −14 onwards. This CGT traces an arc path covering the North Atlantic Ocean and Russia (Fig. 7b-d), and it becomes even more prominent as the arrival of the rain belt events approaches (Fig. 7e). For convenience, we name this CGT the North Atlantic-Russian (NAR) pattern. In terms of the RWS, the North Atlantic wave source appears stronger and extends poleward (e.g., Fig. 7h), while the one over the Indian subcontinent appears weaker than in the previous case (e.g., Figs. 6i and 7i). These observations may explain the short-lived PASI pattern and the dominance of the mid-latitude NAR pattern.
It also turns out that the NAR wave train is largely stationary, albeit with some slightly drifting wave peaks (Fig. 7f). For this reason, it maintains a long-standing deep trough over Siberia, which later penetrates southward and merges with the South Asian trough (Fig. 7c, d). The influence of the resultant large-scale trough over the Asian continent is two-fold: on the one hand, it guides the inland moisture channel from South Asia to East Asia; on the other hand, it advects vorticity and cold, dry air masses, both of which facilitate the frontogenesis that develops rain belt events in East Asia. Although the mid- and upper-tropospheric circulation anomalies associated with the NAR pattern are mostly consistent (Fig. 7a-e), we notice a westward tilt of the vertical axis between the 200-hPa and 850-hPa cyclonic circulations over Siberia from day −7 onwards (Supplementary Fig. 4b, c). This observation implies a baroclinic instability that intensifies the Siberian trough, according to the quasi-geostrophic theory 55.
It is interesting to see that the NAR pattern also pre-exists in the weather regimes of the B3 and O1 channels. Yet, the NAR pattern associated with the B3 channel is much weaker (Supplementary Fig. 5a-f), while the mid-latitude channel (O1) is governed by a reversed NAR pattern (Supplementary Fig. 6a-f). Noticeably, the O1 channel is directed eastward even along the southern flank of the anomalous anticyclone in Siberia, which implies the predominance of the mid-latitude westerlies in the background. As the NAR pattern continues to strengthen, it gives rise to an anomalous cyclone in eastern Siberia (Supplementary Fig. 6c-f), which steers the latter part of the O1 pathway and facilitates the lower-level convergence for rain belt formation in the mid-latitudes (Supplementary Fig. 4m-o).
DISCUSSION
The overarching goal of this work is to unearth the dominant moisture channels for the EASM rain belt events and to improve the knowledge about the pre-existing weather systems at lead times up to 2 weeks. Our findings, to some extent, challenge the traditional perception of the moisture corridors for EASM rainfall from the Indian Ocean, South China Sea, western North Pacific, and Eurasia 3,14,24,25,28. Based on the trajectory clustering, we obtain 15 moisture channels in four corridors: the Somali, South Asian, Bay of Bengal, and Pacific corridors. In particular, the Somali and South Asian corridors turn out to be crucial for supplying moisture to East Asian rain belts but have received less attention in the literature. Our results also update the knowledge pertaining to the minor role of the moisture channels from Eurasia and the South China Sea, as well as the absence of a cross-equatorial channel from Australia for East Asian rain belts.
The importance of terrestrial sources for downwind continental precipitation has gained increasing attention recently 13,15,29. Here, we add that nearly half of the moisture channels to East Asian rain belts collect moisture mainly from terrestrial sources. Moreover, a substantial number of rain belt events with moderate strength and persistence rely on the terrestrial moisture supply (i.e., the South Asian corridor). In addition, if the long-range oceanic moisture channels remain present in a warmed climate, we would expect more extreme rain belt events due to increases in both the water-holding capacity of the atmosphere 56 and the evaporation over the oceans 57 that fuel the moisture channels aloft.
Our results highlight several key weather systems that pave the moisture channels. In particular, the SAL-WNPSH coupling establishes the long-range Somali channels for the Pre-Meiyu and Meiyu rain belts, which pose hydrologic risks to South China and the mid-lower Yangtze river basin. By contrast, the interplay of two anticyclones over South Asia and the western North Pacific explains the South Asian moisture channels in early summer and mid-summer. Similar signals for the SAL-WNPSH coupling and the dual-anticyclone pattern were also noted in our previous study, based on correlation maps of the meteorological fields with moisture contributions from the Arabian Sea and the Indian subcontinent 13. Here, we again observe these interesting weather patterns and examine their controls on the moisture channels in greater detail. We argue that the upper-level divergent winds originating from the SAL could contribute to the later development of the WNPSH and thereby explain their coupling. Other essential weather regimes look familiar, including tropical cyclones and the zonal oscillation of the WNPSH 39,48,49, which steer a number of moisture channels from the Pacific basin, the Bay of Bengal, and regions in their vicinity.
The most interesting finding perhaps lies in the interaction between global wave trains and regional circulations, as manifested by the two CGTs that influence the moisture channels for East Asian rain belts. One is the springtime PASI pattern that propagates eastward and passes by the Sahara, the Middle East, and the Indian subcontinent. This wave train carries a South Asian trough that steers a South Asian channel. Another is the NAR pattern that is more common in several moisture channels, which modulates the synoptic-scale weather regime over Siberia to facilitate moisture tracks and the rain belt formation. It is worth mentioning that the NAR pattern considerably resembles the Russia-China pattern found in our previous study on summertime precipitation in eastern China 4 , which may suggest the crucial role of the NAR pattern in the East Asian warm season.
Taking into account the average lifetime of moisture in the atmosphere 58, we limit our analysis to the 2 weeks prior to the rain belt events when exploring the essential weather regimes. There could be additional drivers across a broader spectrum of time scales that influence the moisture channels. Teleconnections and climatic forcings at longer time scales, such as the 30-60-day boreal summer intraseasonal oscillation (BSISO 59, which has a mode that resembles the SAL-WNPSH coupling), the El Niño-Southern Oscillation 60-62, the North Atlantic Oscillation 63, the Arctic sea ice variations 64, and global warming 11,65, are known to affect East Asian rainfall variability. How these slowly varying forcings and oscillations interact with the moisture channels to the East Asian rain belts is worth investigating in the future. The CGTs discovered in this study may be linked with the summertime Rossby wave breaking 51,66, which warrants further research. We also look forward to future studies on the predictive skills of the discovered weather regimes before the rain belts.
With a more holistic picture of the dominant moisture channels, the key weather patterns and the teleconnections within a 2-week lead time, findings from this work should help better understand the hydrologic cycle of rain belts and benefit weather diagnosis, numerical model evaluation, and short-term heavy rainfall forecasting in East Asia.
METHODS

Data
Meteorological variables are retrieved from the fifth generation of the European Centre for Medium-Range Weather Forecasts (ECMWF) atmospheric reanalysis (ERA5) between 1981 and 2018 at 1° grid resolution for the diagnoses 67. ERA5 data with a higher spatial resolution (0.25° × 0.25°) are adopted as the input for the moisture tracking model.
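For reference, ERA5 pressure-level fields of the kind used here can be retrieved from the Copernicus Climate Data Store with the cdsapi package along the following lines; the dates, variables and grid below are illustrative, not the exact request used in this study.

```python
import cdsapi

c = cdsapi.Client()
c.retrieve(
    "reanalysis-era5-pressure-levels",
    {
        "product_type": "reanalysis",
        "variable": ["geopotential", "u_component_of_wind",
                     "v_component_of_wind", "specific_humidity"],
        "pressure_level": "850",
        "year": "2018", "month": "06", "day": "15",
        "time": [f"{h:02d}:00" for h in range(24)],
        "grid": [1.0, 1.0],   # 1-degree grid, as used for the diagnoses
        "format": "netcdf",
    },
    "era5_850hPa_20180615.nc",
)
```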
Dynamical recycling model (DRM)
We employ the semi-Lagrangian DRM 35,70 to perform the moisture back-tracking. It has been adopted to derive the contributions from local or external sources in different monsoon regions, with reasonable results at a low computational demand 13,20,70,71. The recycling ratio R in the DRM represents the fraction of precipitation in a sink grid recycled from a source's evapotranspiration along the backward trajectory. It can be computed analytically with a semi-Lagrangian scheme 35:

$R = 1 - \exp\left(-\int_0^{t} \frac{E(x', y', t')}{W(x', y', t')}\,\mathrm{d}t'\right),$

where E and W are the evapotranspiration and precipitable water in the semi-Lagrangian coordinates (x′, y′, t′), respectively. This equation is also equivalent to the sum of the relative contributions from each source along the trajectory 70. Following our previous work 13, we partition the upstream domain into 30 source regions whose recycling ratios are accumulated along each trajectory.
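A short numerical sketch of this computation along a discretized back trajectory is given below, with toy values for E and W; the time step and units are illustrative and must simply be kept consistent.

```python
import numpy as np

E = np.array([3.0, 2.5, 1.0, 0.5])      # evapotranspiration along the trajectory (mm/day)
W = np.array([40.0, 38.0, 35.0, 30.0])  # precipitable water along the trajectory (mm)
dt = 0.25                               # time step in days (6-hourly tracking)

R_total = 1.0 - np.exp(-np.sum(E / W * dt))           # total fraction recycled so far
R_cumulative = 1.0 - np.exp(-np.cumsum(E / W * dt))   # growth of R along the path
print(R_total, R_cumulative)
```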
EASM rain belt events detection
We detect rain belts within the East Asian monsoon domain (15-45°N, 105-145°E) from April through September in 1981-2018 based on the following criteria. First, the rainfall amount at each 1° grid cell needs to exceed its local threshold, namely the 80th percentile of wet-day precipitation (>1 mm day−1) smoothed by a Gaussian kernel. By connecting the heavy-rainfall grids in eight directions with no gaps allowed, a rain belt is detected if its zonal extent is greater than 10° of longitude. We then assign rain belts to the same event if they are fed by similar sources. Any pair of rain belts occurring on consecutive days is deemed the same event if the Euclidean distance D between the recycling ratios of their source-receptor networks is less than 10%. Namely,

$D = \sqrt{\sum_{k=1}^{30} \left(R_{i,k} - R_{j,k}\right)^2},$

where R_i and R_j are the arrays containing the 30 sources' recycling ratios (in %) for the pair of rain belts. We set the threshold on D to 10%, as it corresponds to the level just after the peak in the distribution of D over all pairs of rain belts (Supplementary Fig. 7). Each individual rain belt is assigned to the event with the smallest D (i.e., the most similar source-receptor network). We obtain up to 1265 high-impact rain belt events for analysis, each having at least one rain belt with over 90% of its extent within a nested monsoon domain of 20-40°N, 110-140°E 1. Examples of the detected events are given in Supplementary Fig. 8 for readers' reference.
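As a worked example of the matching rule, with synthetic recycling-ratio vectors standing in for real source-receptor networks:

```python
import numpy as np

rng = np.random.default_rng(0)
R_i = rng.dirichlet(np.ones(30)) * 70.0   # toy recycling ratios summing to ~70%
R_j = R_i + rng.normal(0.0, 0.5, 30)      # a very similar source-receptor network

D = np.sqrt(np.sum((R_i - R_j) ** 2))     # the Euclidean distance defined above
print(D, "-> same event" if D < 10.0 else "-> separate events")
```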
Trajectory clustering
Based on our preliminary analysis, the East Asian rain belt events are supplied by several moisture channels (e.g., Supplementary Fig. 8b, d). As such, we employ an Expectation-Maximization (EM)-based curve clustering algorithm 72 to classify the DRM-derived moisture trajectories from the first rain belt affecting the EASM domain in all events. A 4th-order polynomial regression model is trained in the curve clustering. The optimal number of clusters is selected to be 15, at which the gradient of the trained likelihood becomes small and fluctuating (Supplementary Fig. 9). Although the gradient of the likelihood is the same when using 13 or 15 clusters, clustering with 15 clusters generally produces cleaner and more reasonable results, as illustrated in Supplementary Fig. 10. Finally, we assign a rain belt event to a trajectory cluster if over 30% of the back trajectories of the first rain belt affecting the EASM domain belong to that cluster. In this way, only 7% of the events are left without a cluster membership, and 21% have two or more memberships.
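A simplified stand-in for this procedure is sketched below using a two-stage approach (per-trajectory polynomial fits clustered by an EM-fitted Gaussian mixture) rather than the joint EM curve clustering of ref. 72; it only conveys the idea on toy trajectories.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def poly_features(trajectories, order=4):
    """Fit a 4th-order polynomial (lat as a function of lon) to each trajectory."""
    feats = []
    for traj in trajectories:            # traj: (n_points, 2) array of lon/lat
        lon, lat = traj[:, 0], traj[:, 1]
        feats.append(np.polyfit(lon, lat, order))
    return np.asarray(feats)

# toy data: 60 noisy arcs split between two synthetic "corridors"
rng = np.random.default_rng(1)
trajs = []
for k in range(60):
    lon = np.linspace(60, 120, 25)
    lat = (10 if k < 30 else 25) + 8 * np.sin((lon - 60) / 60 * np.pi)
    trajs.append(np.column_stack([lon, lat + rng.normal(0, 0.5, lon.size)]))

labels = GaussianMixture(n_components=2, random_state=0).fit_predict(poly_features(trajs))
print(np.bincount(labels))   # expect two clusters of ~30 trajectories each
```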
Wave activity analysis

We are also interested in the Rossby wave source (RWS, s−2), which can be quantified by 53:

$S = -\nabla \cdot \left(\mathbf{v}_{\chi}\,\eta\right) = -\eta D - \mathbf{v}_{\chi} \cdot \nabla \eta,$

where v_χ is the divergent wind (m s−1), η is the absolute vorticity (s−1), and D is the divergence of the wind (s−1). As such, the RWS is contributed by the rate of change of vorticity due to vortex stretching (i.e., −ηD) and by the advection of absolute vorticity by v_χ. Both the wave source and the wave activity fluxes are computed on spectral harmonics. We select the 200-hPa pressure level for investigation, as wave sources generally peak at this level 74.
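A grid-point sketch of this diagnostic is shown below; since the paper computes it on spectral harmonics, this spherical finite-difference version is only an approximation, and the inputs are assumed to be regular latitude-longitude arrays.

```python
import numpy as np

def rossby_wave_source(uc, vc, eta, lat_deg, lon_deg, a=6.371e6):
    """S = -div(v_chi * eta) on a regular lat-lon grid.

    uc, vc: divergent wind components (m/s), shape (nlat, nlon)
    eta:    absolute vorticity (1/s), same shape
    Mask or exclude the poles, where cos(lat) -> 0.
    """
    lam = np.deg2rad(lon_deg)                 # longitude in radians
    phi = np.deg2rad(lat_deg)                 # latitude in radians
    cosphi = np.cos(phi)[:, None]
    Fx, Fy = uc * eta, vc * eta               # vorticity flux by the divergent wind
    dFx_dlam = np.gradient(Fx, lam, axis=1)
    dFycos_dphi = np.gradient(Fy * cosphi, phi, axis=0)
    return -(dFx_dlam + dFycos_dphi) / (a * cosphi)   # units: s^-2
```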
Statistical significance
A two-tailed one-sample Student's t-test at the 0.05 significance level is adopted to assess the statistical significance of the meteorological field anomalies shown in the composite maps (e.g., Figs. 2-7). The test is performed under the null hypothesis that the anomalies are drawn from a distribution with a mean equal to zero and unknown variance.
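With scipy, this test reduces to a single call per grid cell; the sample below is synthetic.

```python
import numpy as np
from scipy import stats

anoms = np.random.default_rng(2).normal(3.0, 5.0, 78)   # e.g., anomalies across 78 B1 events
t_stat, p_value = stats.ttest_1samp(anoms, popmean=0.0) # two-tailed by default
print(p_value < 0.05)   # stipple the grid cell if significant
```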
DATA AVAILABILITY
The meteorological data is retrieved from the ERA5 by the European Center for Medium-Range Weather Forecast (ECMWF) at https://www.ecmwf.int/en/forecasts/ datasets/reanalysis-datasets/era5. The IBTrACS best track data can be accessed from https://www.ncdc.noaa.gov/ibtracs/index.php?name=ib-v4-access. Derived data supporting the findings of this study are available from the corresponding author upon reasonable request.
CODE AVAILABILITY
The source code for the DRM can be accessed at https://github.com/huancui/DRM_2LDRM. | 2021-05-28T13:58:56.583Z | 2021-05-28T00:00:00.000 | {
"year": 2021,
"sha1": "c3a91da9a02445d08d8289fa90c91ded6b90cd78",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41612-021-00187-6.pdf",
"oa_status": "GOLD",
"pdf_src": "Springer",
"pdf_hash": "c3a91da9a02445d08d8289fa90c91ded6b90cd78",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
225665346 | pes2o/s2orc | v3-fos-license | The new public management theory looks at the socialization reform of college catering service under the valve
The socialized reform of college catering service should draw lessons from the theory of new public management, take the road of marketization, re-divide the division of government and social forces, and develop the college catering service market. Innovate characteristic college catering with information technology and precise positioning of audience demand; The quality of service is guaranteed by perfect management system, excellent service team and multi-party supervision mechanism.
Introduction
As a quasi-public good, college catering service carries a public management function in the broad sense. Under the planned economy system, university logistics operated as a supply- and welfare-oriented system [1]. For a long time, the supply of college catering services in China followed a single self-supply mode, namely the university-run mode. With the development of the society and the economy, the facilities, management and operating mechanisms of university logistics have become unable to meet the needs of socioeconomic development and the development of higher education. As the center of logistics services, catering must adapt to the rapid development of the economy and of higher education, respond to the market, and support education; how to achieve this is an urgent problem before us [2]. Studying university catering services through the new public management theory can point the socialization reform of university catering in the right direction and help explore an effective university catering service model.
The socialization of catering service in colleges and universities is consistent with the theory of new public management
In the 1980s, Britain, the United States and other countries launched the so-called "new public management" government reform movement, which rapidly spread to almost all developed industrial countries. Each country carried out radical administrative reforms, and the public management model underwent a fundamental change. The theoretical and practical model underlying these administrative reforms is called the new public management theory. Its basic management thought is the concept of public management embodied in the business management approach (also known as the "B" approach), which re-examines the management methods of the public sector. Its core idea is to introduce the management methods and market incentive structures of the private sector into the public sector and public services, so that the public sector can fundamentally change the relationship between government and society through a transformed mechanism, ultimately replacing the traditional bureaucratic system with the new public management model. In the 1999 public management development report "Governance in Transition" published by the Organisation for Economic Co-operation and Development (OECD), the characteristics of new public management are summarized in eight aspects: transferring authority and providing flexibility; ensuring performance, control and accountability; developing competition and choice; providing responsive services; improving human resources management; optimizing information technology; improving the quality of regulation; and strengthening central steering functions. It can be said that new public management seeks a governance model to replace the traditional bureaucratic system: it carries out reforms in repositioning government functions, changing the way government services are provided, reforming the government's internal management system, and introducing private-sector techniques to redefine the relationship between the government and the market, thereby improving the level of public management and the quality of public services by means of marketization [3].
At present, China has put forward the idea of building a new logistics support system for colleges and universities in which "the government performs its duties, the market provides services, schools make their own choices, industries are regulated, and departments supervise according to law". Its core is "the combination of public-welfare investment and market-oriented operation". Catering service, as an important part of the logistics reform of Chinese colleges and universities, is basically consistent with the marketization strand of new public management theory in its basic ideas, goals and main tasks [4]. The new public management theory thus provides a reference for the socialized reform of "quasi-public goods" in colleges and universities and offers guidance for the socialization reform of college catering services.
Innovate characteristic college catering with information technology and precise positioning of audience demand
The new public management theory regards the public as the customer and emphasizes the customer's right to choose. In 2019, survey data on catering service satisfaction at two universities in Xi'an showed that satisfaction with variety was 42% and satisfaction with quality was 23% (Figure 1). College catering serves a specific group, namely university teachers and students, which places higher requirements on college catering services: the offerings and the service concept must be continuously innovated to improve service quality and standards.
Construct a college catering service information platform to innovate college catering services
The emergence of the O2O model has had a great impact on college canteens. At the same time, the existing O2O model suffers from serious credibility problems because good and bad shops are intermingled, leaving hidden food safety risks for college students. In dealing with this problem, college catering service departments should not only adopt certain restrictive measures but also follow the trend: build their own information platforms, improve service levels, and win back sales. College canteens should seize this opportunity to promote the development and innovation of college catering services. At present, most college canteens are equipped with computer networks and a software and hardware foundation for daily management, and employees have certain operating skills. However, due to the lack of an information management platform, these resources have not played their due role in practical work. Therefore, it is necessary to build an information platform for college canteens to realize collaborative management of cross-campus, multi-canteen business, real-time synchronization of operating data, effective cost control, simplified processes and improved service quality. Second, college canteens should set up online ordering and delivery services. College teachers and students are quick to accept new things, and combining the online ordering model with the canteen model can guarantee food hygiene and safety at low prices while alleviating canteen crowding. Delivery personnel can be recruited from work-study students to provide door-to-door service for students of the university, which is more convenient. This model can also serve as an internship base, providing internship and entrepreneurship opportunities for students majoring in e-commerce and others interested in it.
Refine the catering service needs of college teachers and students, and advocate and provide diversified and humanized services
There are large consumer groups in universities, and the demand for catering is obviously diversified. Refining the catering service needs of college teachers and students and providing diversified, humanized services can be regarded as a development direction for college catering, offering more innovative points for its development and more choices for college teachers and students. Specifically, adjustments can be made in the following aspects: first, scientific site selection and reasonable distribution; second, optimized functions for the convenience of teachers and students; third, characteristic windows with accurate positioning.
Perfect management system, excellent service team and multi-party supervision mechanism to guarantee service quality
The important content of the logistics socialization reform in colleges and universities is to do a good job in catering services to meet the needs of teaching, scientific research and the diet culture of teachers and students. Therefore, it is an ideal practice to guarantee overall service quality through system constraints in advance, professional assurance in the process, multi-party supervision, and post-hoc evaluation and feedback.
Improve and strictly implement the management system, and manage through system constraints
A perfect system is an important guarantee of the safe and effective operation of college catering services. First of all, college catering services should take ISO 9001 series certification as the leading service standardization benchmark and strictly implement all relevant laws and regulations. Second, catering services in colleges and universities should, according to the actual situation, perfect a series of rules and regulations, such as those covering material procurement, procurement procedures, acceptance of procured materials, assessment methods, cooperative mechanisms, food safety and hygiene rating standards, food packaging materials, services, post responsibilities and job requirements, so as to ensure institutional constraints. Finally, a post responsibility system for university catering services should be established; that is, the accountability mechanism of new public management should be advocated, responsibility for food safety should be decomposed and assigned to departments and specific responsible persons, and responsibility letters should be signed to ensure that responsibility for safety rests with each post. At the same time, in order to ensure execution, post responsibilities and food production operation processes can be posted and hung up, achieving a "system on the wall" and forming a good canteen management culture.
Introduce professional talents and establish training and incentive mechanism
A high-quality service level depends on a high-quality service team. New public management theory attaches great importance to human resource management, and emphasizes improving flexibility in recruitment, tenure, salary and other aspects of personnel management [2]. At present, college catering service personnel are mainly temporarily recruited from society; service personnel mobility is high and teams are unstable, while at the same time both cultural quality and comprehensive quality are low. There are many problems in talent introduction and incentives in universities, which bring many difficulties and hidden dangers to the management and service of logistics catering in colleges and universities.
Perfect supervision mechanism, multi-party participation in supervision
College catering services should not only follow market rules, but also take into account public welfare and education. In order to protect public interests, the supervision mechanism must be further improved to achieve effective monitoring. New public management holds that the role of the internal top-down supervision mechanism of traditional administration is limited, and that the optimization of public services must be realized through the expansion of democratic participation. Therefore, this paper suggests that in the operation and management of school catering services, not only logistics management personnel should be responsible, but all groups within the campus should be included in the scope of the supervision system. First, a leading group for catering service work in colleges and universities should be set up, with the president as group leader, and a system should be formulated for school leaders to dine regularly in the student canteen in order to listen to students' opinions on the canteen and put forward specific opinions and guidance on the work. Second, a school food committee should be set up, with students as the main body of supervision. When making decisions related to students' rights and interests, such as price adjustments in the canteen, student representatives should be organized to participate in hearings, opinions should be solicited extensively from students and seriously considered for adoption, with explanations given where they are not adopted. The school food committee organizes student representatives to visit the canteen and participate in canteen purchasing supervision, and organizes student representative seminars so that students can talk directly with canteen management staff. Thirdly, the school's trade union and catering service center should invite teachers' representatives to hold a canteen work forum to put forward opinions and suggestions on the operation and service of the canteen. Fourth, the functional departments should supervise the whole process of canteen operation and financial accounting.
Table 1. Responsibilities of the college food committee
1 Organize student representatives to attend hearings
2 Organize student representatives to visit the canteen
3 Participate in canteen purchasing supervision
4 Market research
5 Organize seminars for student representatives
6 Carry out satisfaction questionnaire surveys
Conclusion
The socialization reform of college catering services is a complicated and long process. Based on new public management theory, this paper advocates that colleges and universities should cooperate with social forces, acting as guides and coordinators, to develop off-campus catering projects and enrich service content, and should realize the socialization reform of college catering services through a perfect management system, a high-quality service team and a multi-party supervision mechanism. Future research could further explore the specific items of catering services in universities. | 2020-06-25T09:06:17.531Z | 2020-06-18T00:00:00.000 | {
"year": 2020,
"sha1": "8fd4ec663ec9acb86a43848f1aa331659374238f",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/512/1/012068",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "93778cfa31552a76765c4533a432491ecea9ce1f",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Political Science"
]
} |
268252398 | pes2o/s2orc | v3-fos-license | Development of a practical guide for the use of ultrasonography in rheumatoid arthritis: Preliminary recommendations
ABSTRACT Objective: To describe the protocol for developing recommendations on the use of ultrasonography in the management of rheumatoid arthritis (RA) in routine practice. Methods: This is a protocol for good-practice recommendations. Following a systematic literature review, a scientific committee of 6 experts in ultrasonography identified the key questions to be used in developing the recommendations. These recommendations will then be submitted for validation to a group of experts in musculoskeletal ultrasonography using the Delphi method. This first step will yield preliminary recommendations. Subsequently, the preliminary recommendations will be submitted, during a live online meeting, to an extended group of experts in musculoskeletal ultrasonography to verify their relevance. The experts' level of agreement will be recorded. Results: Following the two Delphi rounds, the consensus will cover i) the use of ultrasonography for the positive diagnosis of RA at an early stage, ii) the monitoring of RA activity; and iii) the management of remission. Conclusion: These recommendations aim to harmonize and optimize clinical practice and the management of patients with RA.
Rheumatoid arthritis (RA) is the most frequent chronic articular disease, involving mainly the small joints of the hands and feet [1]. In the absence of adequate treatment, RA may lead to irreversible disability and even premature death [2]. In order to monitor disease activity and progression, several outcome measures and tools have been developed [3]. The presence of persistent active synovitis has been recognized as one of the most important predisposing factors for subsequent joint damage [4]. However, active synovitis is not always detected by clinical examination [5]. In such cases, the physician will not optimize the disease-modifying therapy, which will slowly and silently lead to bone erosions, cartilage damage and tendon tears. In that context, osteoarticular ultrasonography (US) appears to be an interesting tool, as it allows assessment of active synovitis, tenosynovitis and erosions [6]. Definitions of elementary US lesions have been well established for several years [7]. Scoring systems to quantify the extent and activity of inflammatory lesions are also codified, and the most used is the semi-quantitative score of EULAR (European League of Rheumatology) [8]. However, the practical use of US (e.g., material to use, joints to assess at diagnosis or follow-up, frequency of assessment, the written medical report) is not clearly addressed in the previous literature. The French Society of Rheumatology published in 2019 a practical guideline for the use of US in RA [9]. As US is constantly evolving, new data have become available over the past five years. Thus, the aim of this paper was to describe the protocol for the most up-to-date practical use of US in the diagnosis and follow-up of RA.
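As a concrete illustration of such semi-quantitative grading, the minimal sketch below sums per-joint grades into a global score, assuming the 0-3 grading convention used in EULAR-OMERACT synovitis scoring; the joint list, the summation rule and the example values are illustrative assumptions rather than part of any official scoring system.

```python
# Minimal sketch: aggregating semi-quantitative ultrasound grades (0-3 per joint)
# into a global synovitis score. The joint set and summation rule are
# illustrative; actual scoring systems define their own joint counts and rules.

def global_us_score(grades: dict[str, int]) -> int:
    """Sum per-joint synovitis grades, each expected in the range 0-3."""
    for joint, grade in grades.items():
        if not 0 <= grade <= 3:
            raise ValueError(f"{joint}: grade {grade} outside the 0-3 scale")
    return sum(grades.values())

# Example: bilateral wrist and MCP2-3 assessment (hypothetical values)
patient = {"wrist_R": 2, "wrist_L": 1, "MCP2_R": 3, "MCP2_L": 0,
           "MCP3_R": 1, "MCP3_L": 0}
print(global_us_score(patient))  # -> 7
```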
The present study is a protocol design for a practical guideline. The development of this guideline involved a steering committee (SC) composed of six rheumatologists experienced in musculoskeletal ultrasonography (KBA, HA, SS, AH, RB, AE in the authors' list). The general organization of the procedure for elaboration of the recommendations is illustrated in Figure 1.
Definition of questions within each theme by the steering committee
At the initial task force meeting, members of the SC raised clinically relevant questions related to key aspects of the use of US in RA. The research questions were agreed by consensus and five final research questions were selected, which encompassed the following topics: i) Overarching principles: the role of US in the management of RA; which probes and equipment to use; which imaging modalities (B mode, Doppler mode) to recommend; which definitions of inflammatory and structural lesions to adopt?
ii) Definition of the sites to be examined: Where to assess inflammatory (synovitis, tenosynovitis) and structural (erosions) lesions?
iii) Diagnosis: What is the value of US in the diagnosis of inflammatory arthralgia, early arthritis and early RA? iv) Follow-up and therapeutic response: What is the value of US in the follow-up and evaluation of the response to RA treatment? v) Remission: What is the value of US in the management of RA remission?
Systematic review of the literature
A systematic search for articles was performed during a face-to-face session of the SC. During this session, databases including PubMed/Medline, Embase and Cochrane were searched using the combination of the following Medical Subheading terms: (Rheumatoid arthritis) AND (ultrasonography OR ultrasonography, Doppler OR ultrasonics). The inclusion criteria for the selected studies were: adult population, publications in English or French, and no date limit. The articles selected were classified according to the three major topics: i) Overarching principles, ii) Diagnosis; and iii) Follow-up and remission. Each pair of SC members handled one topic, with the mission of analyzing the literature in depth and selecting relevant articles. Later on, an online meeting was scheduled where each team presented the results of the systematic literature search, and preliminary recommendations were developed and written.
Validation of the recommendation according to the Delphi process
This step will follow the elaboration of the preliminary recommendations. A two-round Delphi consensus [10] will be conducted through a GoogleForm® questionnaire, which will be dispatched by email to an expert group of 30 rheumatologists experienced in musculoskeletal ultrasonography. The rheumatologists selected will have a minimum of 5 years of regular US practice (i.e., 10 rheumatologists from each country: Tunisia, Morocco and Algeria). The 30 rheumatologists will be asked to respond within two weeks. A reminder email will be sent to non-responders after one week. The rheumatologists will rate their level of agreement for each item using a Likert scale ranging from 0 (totally disagree) to 5 (totally agree). Additional free spaces will be reserved for suggestions at the end of each section. Agreement will be considered reached if more than 75% of rheumatologists attribute a level of agreement greater than 3, in which case the item will be included in the recommendations from the first round. Disagreement will be considered reached if more than 75% of participants rate a level of agreement less than 2, in which case the item will be definitively excluded. If a statement does not correspond to either of the situations cited above, it will be included in the second round of the survey. The second questionnaire will be sent two weeks after starting the Delphi process. After one week, a reminder email will also be issued to non-responders. Statements rated above 3 by more than 75% of participants will be retained in the recommendations.
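The agreement rule described above lends itself to a simple mechanical check. The sketch below classifies one statement after a Delphi round according to the thresholds stated in the protocol (more than 75% of ratings above 3 retains the item; more than 75% below 2 excludes it; anything else is carried to the second round); the function and the example ratings are illustrative, not part of the study materials.

```python
# Minimal sketch of the Delphi classification rule described above.
# Ratings are on a 0-5 Likert scale; thresholds follow the protocol text.

def classify_statement(ratings: list[int]) -> str:
    """Return 'retained', 'excluded' or 'next_round' for one statement."""
    n = len(ratings)
    agree = sum(r > 3 for r in ratings) / n      # ratings of 4 or 5
    disagree = sum(r < 2 for r in ratings) / n   # ratings of 0 or 1
    if agree > 0.75:
        return "retained"      # included in the recommendations
    if disagree > 0.75:
        return "excluded"      # definitively excluded
    return "next_round"        # re-submitted in the second Delphi round

# Example with 30 hypothetical expert ratings
ratings = [5, 4, 4, 5, 4, 4, 5, 4, 4, 5, 4, 4, 5, 4, 4,
           5, 4, 4, 5, 4, 4, 5, 4, 3, 3, 2, 4, 5, 4, 4]
print(classify_statement(ratings))  # -> 'retained' (27/30 = 90% rated > 3)
```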
Experts' opinion and elaboration of the final recommendations
The recommendations validated by the Delphi process will be presented in three concomitant workshops in Tunisia, Morocco and Algeria. In each country, the workshop will be attended by local members of the SC, the panel of 30 experts who participated in the Delphi process (10 members from each country), as well as rheumatologists with expertise in the management of RA, whether or not they perform US. The objective of these workshops will be to assess the relevance, exhaustiveness and comprehensibility of the proposed recommendations. The literature review that informed the preliminary recommendations will be presented, and the recommendations will then be discussed in each workshop.
Validation of the final recommendations
Following the three workshops, a webinar including the members of the SC will be organized to report experts' opinions and comments. Where the suggestions from each session agree, a single wording will be retained. Otherwise, the wording proposals from each workshop will be discussed. Then, the degree of agreement with the final wording of each recommendation will be evaluated by vote on a Likert scale graduated from 0 (totally disagree) to 5 (totally agree).

US standardization, considering the particularities of each joint or tendon affected by RA, is certainly a requirement. Indeed, over the last two decades (i.e., 2003-2023), a growing number of studies have been published investigating the role of US in the diagnosis and follow-up of patients with RA [11][12][13].
In RA, US is helpful to detect early synovitis and is also sensitive in the identification of bone erosions [14,15]. Although numerous studies have been published in this field, the scores to use and the sites to assess are not yet consensual and remain pending issues. Besides, US is an imaging modality that is relatively easy to set up in clinical practice, but it is often regarded as operator dependent, with associated reproducibility issues [16,17]. Its reliability varies among studies and still has to be improved. The quality of US devices and probes is also surely an item of major importance [18]. The choice of equipment and the selection of parameters to be used are also pending issues. To resolve these controversies, the first solution may be the implementation of US courses for a large number of rheumatologists; improved training will enhance the competency of sonographers and therefore US reliability. The second solution may be a consensual use of US according to the different situations encountered when treating RA (i.e., diagnosis, follow-up and remission). As a number of points regarding the use of US in daily rheumatological practice need to be elucidated, these recommendations may guide ultrasonographers in daily practice.
US is a quick and safe tool useful to complement the physical examination of RA patients. Adequate and guided use of this imaging modality helps rheumatologists not only in diagnosing RA, but also during follow-up and in the management of remission.
Figure 1. General organization of the development of US recommendations in RA. US: Ultrasonography, RA: Rheumatoid arthritis | 2024-03-07T06:16:46.583Z | 2023-07-01T00:00:00.000 | {
"year": 2023,
"sha1": "79deb6293aafc44545e66f2a455234defd1206f8",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "921fd37b60e728ba5e82e4f3e30ae91f351a2a40",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Medicine"
]
} |
17308991 | pes2o/s2orc | v3-fos-license | Methodology for the Selection of In-Seam Gas Drainage System for Intensive and Safe Coal Mining
Synopsis. The article reviews general principles of selecting efficient solutions for in-seam gas drainage and provides an analytical foundation for selecting parameters of in-seam gas drainage with due account for the estimated output of the production face. The schemes of degassing preparation at the production facilities of Kuzbass are presented. Recommendations are provided on the selection of in-seam gas drainage methods at the production areas of the Kirova Mine, JSC SUEK-Kuzbass.
Introduction
Ensuring a risk-free environment in coal mines is currently of vital importance [1,2]. Worldwide, extensive research is being conducted on issues related to effective degassing of highly gassy coal seams [3][4][5][6][7][8][9][10]. The coal mining company JSC SUEK-Kuzbass is the industry leader in Russia, and issues of effective in-seam gas drainage are of great importance to ensure profitable and safe mining.
Approach to the selection of the degassing preparation method
We believe that the methodological approach to the selection of the coal seam degassing preparation method should be based on the following key factors: -the predicted rate of coal seam methane yield, determined at the stage of experimental work when the main characteristics and condition of the coal-and-methane-bearing reservoir (in-seam pressure, permeability, sorption characteristics of the coal seam) are assessed; -the time required for in-seam gas drainage; -the values of the "gas content threshold" at the estimated face output. The integrated system of coal seam degassing preparation normally includes a principal technology and auxiliary technological schemes that have successfully passed approval in underground conditions. Both international [11][12][13][14][15][16] and domestic techniques can be used for adequate determination of the fundamental properties and characteristics of a coal seam.
The evidence from practice shows that at the depths where mines are currently operating, the real effectiveness of gas drainage is often only 10-15%, which is not enough to remove the restrictions on enhanced face output. Therefore, designing and implementing more efficient in-seam gas drainage technology is becoming the key task to ensure methane safety of mining operations and significantly raise per-face performance.
In-situ experiments
In our opinion, methodological recommendations on the selection of viable technological schemes for the degassing preparation of coal seams for subsequent production should include the implementation of two main stages, which we shall demonstrate by the example of extraction panels 24-58, 24-59 and 24-60, Kirova mine (Table 1).
Table 1. Methodological recommendations on the selection of the in-seam gas drainage method for extraction panels 24-58, 24-59 and 24-60, Kirova mine.
Standard in-seam gas drainage undertaken through wells drilled from development workings is used as the principal (base case) technology. Parameters of this method are provided in the Gas Drainage Operations Manual (2012).
Experimental work has been carried out on testing and approving the auto-pneumatic impact method, justifying the recommendation to use this technique to boost gas drainage in production area 24-55. The essence of this method is described in [17].
The technology of hydrodynamic impact with underground hydraulic fracturing (UHF) of the coal seam to be drained is an auxiliary degassing scheme and is protected by patent [18]. It was tested in panel 24-58 (12 UHF wells) [19][20][21], where it facilitated a 3- to 4-fold growth of methane yield from in-seam gas drainage wells drilled in the hydrofracture zones, which resulted in significant improvement in terms of gas content in the longwall and allowed face output to be raised.
Results of field experiments and further discussion
The effectiveness of underground hydraulic fracturing technology was evaluated in the process of longwall mining in panel 24-58. A comparison of the averaged longwall performance parameters in the hydraulic fracturing zone and in the control zone is shown in Table 2. It can be seen that in the areas of the production face where hydraulic fracturing was performed, the average value of relative gas content dropped by 30%, production increased on average by 21%, and process stoppages associated with the "gas content threshold" decreased by 42%.
The improved effectiveness of in-seam gas drainage in the areas where hydraulic fracturing was performed in panels 24-58, 24-59, 24-60 and 24-62 justified the launch of a large-scale gas drainage program in panels 24-63 and 24-64. The plan for 2019-20 is to implement the technology of advance in-seam gas drainage through surface boreholes combined with hydraulic dissection of the Boldyrevsky coal seam.
One of the important factors in selecting the in-seam gas drainage technology is the value of the "gas content threshold" and the determination of the required in-seam gas drainage effectiveness. A nomogram to determine the required yield of methane by in-seam gas drainage, as applied to the Boldyrevsky seam at the Kirova mine, is shown in Figure 1. It helps to make a reasoned choice of in-seam gas drainage system depending on the required face performance.
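Although the nomogram itself encodes site-specific relationships, the logic behind it can be illustrated with a simplified methane mass balance. In the sketch below, the permissible emission rate is capped by the ventilation system's capacity to dilute methane, and the required drainage efficiency follows from the gap between predicted and permissible emission; the linear emission model, the function name and all numeric values are illustrative assumptions rather than data for the Boldyrevsky seam.

```python
# Illustrative sketch of the "gas content threshold" logic behind the nomogram.
# Simplifying assumptions: methane emission scales linearly with face output,
# and ventilation dilution capacity caps the permissible emission rate.

def required_drainage_efficiency(face_output_tpd: float,
                                 relative_gas_content_m3_per_t: float,
                                 air_flow_m3_per_min: float,
                                 max_ch4_fraction: float = 0.01) -> float:
    """Fraction of in-situ methane that drainage must capture (0 if none)."""
    predicted = face_output_tpd * relative_gas_content_m3_per_t / 1440  # m3/min
    permissible = air_flow_m3_per_min * max_ch4_fraction                # m3/min
    return max(0.0, 1 - permissible / predicted)

# Hypothetical numbers: 10,000 t/day target output, 12 m3/t relative gas
# content, 2,000 m3/min of ventilation air, 1% maximum CH4 concentration.
print(round(required_drainage_efficiency(10_000, 12, 2_000), 2))  # -> 0.76
```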
Analyzing the situation in the mines of JSC SUEK-Kuzbass, the following observations can be made. The Kotinskaya, Taldinskaya-Zapadnaya and Kirova mines have the highest per-face output. Two of these (the Kotinskaya and the Kirova mines) are of interest in terms of the gas factor. Analysis of per-face output restrictions due to the gas factor for a number of extraction panels at the Kirova and Kotinskaya mines shows that in-seam gas drainage with a design efficiency of 0.2 to 0.3 is required for the majority (up to 80%) of production faces. In-seam gas drainage practice at the aforesaid extraction panels of the Kirova mine confirms that such efficiency, under favorable conditions, can be achieved using advanced methods of pre-drainage undertaken from development workings (in particular, an option of holistic gas drainage technology is shown in Table 1). In case reliable information about tectonically stress-relieved and tectonically stressed zones (TSRZ and TSZ) is available for particular mine areas, these parameters can be adjusted.
Gas drainage in a TSRZ zone is to some extent similar to gas drainage of coal seams relieved from ground pressure due to underworking or overworking of these seams. In TSRZ zones the parameters may be changed: hole spacing, i.e. the distance between holes, may be increased. The distance between in-seam gas drainage holes at the Kirova mine can be increased, for example, from 12 to 18 meters, specifically in UHF areas, but this requires additional field trials, which are included in the program for further research.
TSZ zone gas drainage is to some extent similar to gas drainage of coal seams unrelieved from ground pressure (either a single seam or the first seam in a series of strata). These coal seams have significantly lower permeability and can potentially be prone to outbursts. These parts of coal seams require the mandatory use of complex in-seam gas drainage, including both basic (main) technologies and auxiliary active operations aimed at enhancing intrinsic gas permeability. Hydraulic fracturing of coal seams, for example, can be used as an auxiliary seam treatment. In such cases, it is advisable to increase the volume of water injected, since moistening of coal seams increases their quasi-plasticity and therefore reduces the outburst hazard. When hydrofracturing technology or, especially, hydraulic dissection is used, it is advisable to apply a proppant agent to fix gas-draining cracks, which may close more intensively in TSZ areas. Specific recommendations should be based on the quantitative characteristics of the TSZ and TSRZ and the specific properties (primarily gas permeability) and gas-related condition of the coal seam in these zones.
Conclusions
1. In-seam gas drainage technologies were studied in situ, whereby hydraulic fracturing and auto-pneumatic impact techniques were tested at the Kirova mine production areas. An appraisal was made of their effectiveness and prospects for further use as supplemental and boosting techniques within a holistic in-seam gas drainage program. | 2019-07-19T20:04:02.171Z | 2019-01-01T00:00:00.000 | {
"year": 2019,
"sha1": "3367bf5f2fb4ad46a382012e6f8ceffba0d4ece0",
"oa_license": "CCBY",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2019/31/e3sconf_iims18_01032.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "b3c1c92bf99191c4aaf252e122bd88166282c882",
"s2fieldsofstudy": [
"Geology"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
17308991 | pes2o/s2orc | v3-fos-license | Managing resources in NHS dentistry: using health economics to inform commissioning decisions
Background The aim of this study is to develop, apply and evaluate an economics-based framework to assist commissioners in their management of finite resources for local dental services. In April 2006, Primary Care Trusts in England were charged with managing finite dental budgets for the first time, yet several independent reports have since criticised the variability in commissioning skills within these organisations. The study will explore the views of stakeholders (dentists, patients and commissioners) regarding priority setting and the criteria used for decision-making and resource allocation. Two inter-related case studies will explore the dental commissioning and resource allocation processes through the application of a pragmatic economics-based framework known as Programme Budgeting and Marginal Analysis. Methods/Design The study will adopt an action research approach. Qualitative methods including semi-structured interviews, focus groups, field notes and document analysis will record the views of participants and their involvement in the research process. The first case study will be based within a Primary Care Trust where mixed methods will record the views of dentists, patients and dental commissioners on issues, priorities and processes associated with managing local dental services. A Programme Budgeting and Marginal Analysis framework will be applied to determine the potential value of economic principles to the decision-making process. A further case study will be conducted in a secondary care dental teaching hospital using the same approach. Qualitative data will be analysed using thematic analysis and managed using a framework approach. Discussion The recent announcement by government regarding the proposed abolition of Primary Care Trusts may pose challenges for the research team regarding their engagement with the research study. However, whichever commissioning organisations are responsible for resource allocation for dental services in the future; resource scarcity is highly likely to remain an issue. Wider understanding of the complexities of priority setting and resource allocation at local levels are important considerations in the development of dental commissioning processes, national oral health policy and the future new dental contract which is expected to be implemented in April 2014.
Background
The resources associated with providing NHS dental services are sizable. In December 2010, a Department of Health publication outlining proposals for piloting a future new dental contract in England stated that NHS dentistry accounted for almost £3 billion of public expenditure (including patient charges) [1]. Primary Care Trusts (PCTs) in England currently manage devolved finite resources as a result of the new General Dental Services (nGDS) contract introduced in April 2006. Until that time, PCTs had little control over the national NHS dental budget as resources were held centrally and administered largely through dentists submitting their requests for payment on a fee-per-item and/or capitation basis. Many stakeholders incorrectly viewed the centrally held NHS dental budget as 'non-cash limited' and, arguably, few would have referred to NHS dental services as particularly 'resource constrained'.
The nGDS contract in England and Wales charged PCTs (and Local Health Boards in Wales) with the responsibility for ensuring that appropriate dental services were developed which were tailored to the needs of local populations. Until 2006, local health organisations had never before been placed in such a prime position through which to shape local NHS dental services using local commissioning. Despite this, significant concerns have been expressed regarding the great variation in the commissioning skills between PCTs. The Health Select Committee Report on Dental Services published in July 2008 criticised the commissioning arrangements for NHS dentistry by highlighting: 'In-house commissioning skills vary greatly between PCTs. As the Minister acknowledges, too many PCTs are not doing a good job...' [2].
The Government response to the Health Committee Report acknowledged that work needed to involve addressing the continued variability in the quality of dental commissioning [3]. At the time of hearing the evidence, the Health Select Committee was informed that the role of the PCT was 'currently very weak' [4], and a national survey by the Patients Association published in March 2008 similarly criticised PCTs for a lack of creativity in their dental commissioning arrangements [5].
The 2007 Annual Health Check undertaken by the Healthcare Commission in England, revealed that almost forty percent of PCTs scored 'fair' or 'weak' in their use of resources [6] and in relation to dental services the need for improved guidance on best practice has been highlighted [5]. The introduction of a vision for World Class Commissioning [7] together with the reforms led by Lord Darzi [8], firmly placed PCTs at the time, as the 'leaders of the local NHS'. As a consequence, this study proposal was designed around two PCTs who effectively manage local NHS dental services as one commissioning organisation, and a large secondary care teaching dental hospital.
Primary dental care appeared as a national priority in the Operating Framework for the NHS in England 2009/10 [9] and the Framework referred to improving NHS dental services in a number of key areas: 'While progress in some priorities is commendable, a lot more needs to be done to improve access to dentistry, as well as the quality of care and oral health in the community' [9].
Research conducted by the authors has highlighted a lack of clarity with regard to how some PCTs structure their commissioning processes in order to ensure the efficient use of finite resources for NHS dentistry [10,11]. Indeed, the earlier Operating Framework for the NHS in England 2008/09 similarly made specific reference to the commissioning of dental services: 'PCTs need to ensure robust commissioning strategies for primary dental services, based on assessments of local needs and with the objective of ensuring yearon-year improvements in the number of patients accessing NHS dental services' [12].
The recent Government reviews and publications on NHS dental services together with the research team's earlier studies [10,11], collectively suggest that PCTs may benefit from further guidance and support in their dental commissioning responsibilities. As a consequence, these organisations might then be supported to use finite resources more efficiently and equitably. Resources do not simply refer to financial inputs but they include consideration of staffing and workforce requirements. In a recent 2010 'local commissioning survey' conducted by the British Dental Association, it was reported that seventy-four percent of dental commissioning leads in PCTs across England felt they needed additional support in their dental commissioning teams [13]. Eighty-one PCTs participated in the survey which equates to a 53% response rate.
The 2009/10 NHS Operating Framework in England placed emphasis upon a need to review dental commissioning strategies in order that transparent and open procurement processes exist [9] and the current 2011/ 12 Operating Framework also calls upon PCTs to commission improvements in access to NHS dentistry and improve efficiency through effective contract management [14]. In response to the collective concerns regarding the local commissioning of NHS dental services, the research team propose a pragmatic economics-based approach to structure the dental commissioning process. The rationale behind proposing this approach is to expressly consider the key economic principles 'opportunity cost' and 'the margin' to determine whether resources for dentistry may be re-allocated (within the service) to maximise efficiency.
There are several economic approaches that could be applied in the study (for example cost-benefit analysis CBA, cost-effectiveness analysis CEA, and cost-utility analysis CUA). However, research has highlighted challenges to their application in practice [15][16][17][18][19][20][21][22][23]. A key issue for healthcare decision-makers is that technically sound health economics methods often cannot reflect the driving complexities of the commissioning process [24]. Programme budgeting and marginal analysis (PBMA) draws upon the same theoretical principles as the economic analyses listed above. However, it is less constrained by having to pre-define the measure of outcome to be used and, indeed, would permit multiple outcomes into the evaluation framework. Thus, it also provides a more flexible framework which can be applied to existing commissioning processes, akin to the use of traditional decision analytic approaches [25]. PBMA has been applied successfully in Australia where the process allowed immediate decisions to be made regarding the local priorities for dentistry [26]. The application of PBMA in other health systems has similarly shown positive impacts to priority setting and in the allocation of scarce health care resources [27].
Aim of the study
The aim of this study is to develop, apply and evaluate an original economics-based framework built upon a PBMA approach, to assist commissioners in their management of finite resources for dental services. The study will draw upon key economic principles associated with PBMA to inform and guide commissioners in their delivery of appropriate dental services which are tailored to the needs of local populations.
Action Research Question
How can health economics improve the commissioning of NHS dental services for the benefit of patients and local populations?
Methods/Design
We propose an action research approach [28] which will apply mixed methods to record the opinions, successes and challenges facing stakeholders involved with decision-making for NHS dental services in both primary and secondary care settings in northern England. The study will also document the stages involved in the application of a pragmatic economics-based framework with which to structure the decision-making process. The rationale for applying action research as a scientific approach is threefold. First, to involve NHS staff and other participants as 'co-researchers' working closely with the research study (rather than simply doing research on them), second, to identify organisational factors (e.g. management structures and workforce) within the PCTs involved that may impact upon the commissioning of local oral health care services, and third, to determine whether action research can bring about a degree of change within these NHS organisations [29] as a result of using health economics in the commissioning process.
The importance of participant involvement in the research process was identified in a NHS Health Technology Assessment published in 2001 which included a systematic review relating to action research [30]. The subsequent report acknowledged the importance of participant involvement in the research process and recognised how this approach may increase active engagement of users in NHS services. Of direct relevance to our proposed study, the report suggested that action research could be used for the development of knowledge and understanding in relation to informed decision-making [30].
Data collection
Data generated by this study will be collected in the field by one member of the research team (RH). Initially, the study will explore the views of stakeholders regarding issues facing NHS dental services from the perspectives of each group as an in-depth scoping exercise. A combination of semi-structured interviews with PCT staff and NHS dentists together with focus groups comprising service-users, will document participants' responses according to a pre-piloted topic guide. It is estimated that approximately twenty semi-structured interviews with NHS professionals and up to four focus groups with service-users is likely to achieve data saturation in the settings identified. Each interview and focus group will follow a topic guide which will be modified and updated in response to feedback received from participants throughout the research study. It is anticipated that each interview and focus group will last approximately 60 minutes. Advisory panel meetings are a core component of PBMA approaches and these will comprise up to three representatives from each stakeholder group. A number of these panel meetings will be convened during the study in order to agree local priorities for NHS dental services, to finalise the prioritisation criteria for use in the decision-making process and to consider relevant data sources with which to inform potential resource reallocation within the existing dental budget. Figure 1 outlines an anticipated order of the research methods and stages in the nominated primary and secondary care settings. The diagram also outlines the key stages involved in a generic programme budgeting and marginal analysis exercise and is based upon earlier work published by one of the research team (CD) [31].
Following a baseline qualitative assessment of current dental issues facing stakeholders, two inter-related PBMA case-studies will be conducted -the first will be based in primary dental care and the second in a secondary care (dental teaching hospital) environment. Although the details of each process will be specific to each case-study, the generic process for each would be as follows. A detailed map of current dental expenditure within the NHS organisations will be produced by the Principal Investigator working with the PCT and hospital, and this will be distributed to all participant groups for comment. This will form the Programme Budget (PB) and it will aim to clarify how resources are currently spent on NHS dentistry. The study will then move into the Marginal Analysis (MA) phase whereby resources are considered for reallocation in order to maximise oral health 'per pound spent'. During the initial stages of the research study, all participants will be asked to identify local priorities for investment and disinvestment in NHS dental services before decision-making criteria are agreed and weighted by each group. All NHS professionals will be invited to nominate areas within the local dental budget for investment and disinvestment through the use of an anonymous, customised postal questionnaire. Service-users will undertake the same process within the baseline focus group meetings.
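To illustrate what the Programme Budget stage produces, the short sketch below aggregates expenditure lines into programme-level totals and shares; the spend categories and figures are hypothetical placeholders, not data from the participating PCT.

```python
# Minimal sketch of a programme budget: aggregating expenditure lines into
# programme-level totals and shares. Categories and figures are hypothetical.

from collections import defaultdict

spend_lines = [
    ("general dental services", 1_450_000),
    ("general dental services",   320_000),
    ("urgent/out-of-hours care",  210_000),
    ("orthodontics",              380_000),
    ("oral health promotion",      90_000),
]

budget = defaultdict(int)
for programme, amount in spend_lines:
    budget[programme] += amount

total = sum(budget.values())
for programme, amount in sorted(budget.items(), key=lambda x: -x[1]):
    print(f"{programme:28s} £{amount:>9,}  ({amount / total:5.1%})")
```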
A number of dental business cases (or proposals) will be agreed by the advisory panel which are based upon the emergent priorities for local NHS dental services and a decision-making process using PBMA will be operationalised in order to rank the proposals according to the decision-making (or 'prioritisation') criteria.
Throughout this process, the P.I. will act as a facilitator at advisory panel meetings and will work alongside fellow participants within the multiple action research cycles of 'plan', 'act', 'observe' and 'reflect'. Additional focus groups and semi-structured interviews will document participants' experiences of their involvement with the dental decision-making process. Focus groups and semi-structured interviews will be digitally audio-recorded and professionally transcribed verbatim for subsequent analysis. Written questionnaires and voting forms will be devised to both weight and rank the proposed dental programmes under consideration, alongside the agreed prioritisation criteria. Data analysis will therefore include both qualitative and simple quantitative techniques.
Participants will consider written evidence (both clinical and financial) in support of each proposal. The data and inputs required to inform the decision-making process will be decided by the panel. Examples may include summaries of clinical dental guidelines, research papers detailing the clinical effectiveness of interventions and the associated costs and benefits of the business cases under consideration. For each proposal, the facilitator will prepare a 'panel approved' business pro forma. The pro forma will include sufficient detail to enable all members of the advisory panel to ultimately award a score against the agreed prioritisation criteria. Advisory panel meetings will encourage discussion amongst the group, they will provide an opportunity to consider additional scientific evidence and they will directly involve representatives from each of the three stakeholder groups in a local priority setting and resource allocation exercise.
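To make the subsequent scoring step concrete, the sketch below ranks hypothetical business cases by a weighted sum of panel scores against prioritisation criteria; the criteria, weights, proposals and scores are placeholders, since the real values will be agreed by the advisory panel during the study.

```python
# Minimal sketch of the marginal analysis ranking step: each proposal is scored
# against the weighted prioritisation criteria and ranked by total weighted
# score. Criteria, weights and scores are hypothetical placeholders.

weights = {"oral health gain": 0.4, "access": 0.3, "equity": 0.2, "cost": 0.1}

proposals = {
    "extended-hours access scheme": {"oral health gain": 6, "access": 9,
                                     "equity": 7, "cost": 5},
    "fluoride varnish programme":   {"oral health gain": 9, "access": 5,
                                     "equity": 8, "cost": 8},
    "domiciliary care expansion":   {"oral health gain": 7, "access": 6,
                                     "equity": 9, "cost": 4},
}

def weighted_score(scores: dict[str, float]) -> float:
    return sum(weights[c] * scores[c] for c in weights)

ranking = sorted(proposals, key=lambda p: weighted_score(proposals[p]),
                 reverse=True)
for p in ranking:
    print(f"{weighted_score(proposals[p]):.2f}  {p}")
# -> 7.50 fluoride varnish programme / 7.00 extended-hours / 6.80 domiciliary
```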
Study sample
The study will use a purposive sampling strategy to ensure that the stakeholder groups' views are represented in the data. Service-user representation will be sought by formally approaching the Chair of the PCT's Local Involvement Network (LINk) inviting members to participate in the study. Dental practitioners will first be identified and approached via the PCT's Dental Practice Advisor (DPA) who will also contact the Chair of the Local Dental Committee (LDC). PCT staff will include dental commissioners and public health practitioners who will be contacted via letter directly by the PI inviting them to participate.
Inclusion criteria comprise adults over the age of 18 years who live and/or work within the geographic area of the selected NHS organisations and who would fall into one of the three stakeholder groups (local serviceuser/NHS dentist/PCT staff). Exclusion criteria include children and teenagers aged 17 years and below, participants unable to consent for themselves and adults who do not speak English.
As is common in qualitative research, the final sample size will be determined by the need to achieve data saturation [32]. However, as a guide in the planning phase, and based upon the research team's earlier work, it is envisaged that the study will recruit approximately fifty participants in total from the three stakeholder groups identified in both the primary and secondary care-based case studies. An overarching 'advisory panel' will be convened at an early stage during the research study. This panel will provide balanced representation from each stakeholder group. Although this is a commonly-recognised step in the PBMA approach, it is common for multi-disciplinary and multi-functional groups to be set up within health organisations to review services in specific areas (in this case, dentistry). In this sense, PBMA merely seeks to build on what already happens in such organisations.
Data analysis and interpretation
Data collection and analysis will occur concurrently using the constant comparative method [33] in order to incorporate the responses of participants into topic guides. Professionally produced audio transcripts from the recorded semi-structured interviews and focus groups will be returned to the P.I. for qualitative analysis. Each transcript will be labelled with a unique participant identifier to ensure that the identity of participants remains confidential. Thematic analysis will be used throughout and the data will be managed manually using a framework approach [34]. The validity of data interpretation will be strengthened through independent coding and analysis by at least two members of the research team (RH and CE). Regular feedback to participants of the results generated to date, will attempt to ensure that the main themes and findings are interpreted and reported accurately. The facilitator will use the beginning of each focus group and advisory panel meeting to present the emergent themes and outcomes generated by earlier sessions. Participants will then be asked for their views and comments in order to verify that the data and meeting outcomes are a true record. This reciprocity is an inherent component of an ethical action research approach [35].
The research team will adhere to COREQ (Consolidated criteria for reporting qualitative research) criteria [36] for reporting qualitative research in papers which arise from this study. COREQ comprises a 32-item checklist to assist researchers in their reporting of study parameters. The three domains which form the COREQ checklist include: the research team and reflexivity; study design; and analysis and findings [36].
Ethical approval
The study has approval from County Durham and Tees Valley 2 Research Ethics Committee [Ref: 10/H0908/9] and NHS Research Governance approval from the participating NHS organisations involved in primary care. Further ethical reviews will be submitted as 'substantial amendments' as the study evolves in response to the views of participants. The principal investigator (RH) holds an honorary NHS contract and is the only member of the research team to have direct access to patients and NHS staff in this study.
Participant data will be stored confidentially by the principal investigator in accordance with the Data Protection Act 1998 and local NHS protocols. Written consent will be taken from each participant on enrolment and a unique identifier code will protect each participant's anonymity alongside published verbatim quotes taken from the transcripts.
Management for the recruitment of service-users will be devolved to the Chair or organiser of the local engagement groups (e.g. LINks). This will mean that the research team do not need to contact members of the public directly (thus reducing any potential for inadvertent coercion), nor will the research team need to store personal data such as the home addresses of members of the public.
Study limitations
The research is based upon a series of in-depth qualitative case studies in primary and secondary care NHS organisations in northern England. The organisational structure within each setting may contain unique aspects from which it may not be possible to generalise to other NHS settings in England. However, in defence of the action research and qualitative approaches selected, it was considered appropriate to ground the study firmly within existing NHS organisations in order to explore and document the complexities surrounding dental priority setting and decision-making. The recent government Spending Review and the White Paper 'Equity and Excellence: Liberating the NHS' have announced the proposal to abolish PCTs from April 2013 [37]. This has led to demonstrable flux and the initiation of transitional arrangements within the PCTs identified. As a consequence, the level of engagement with the study by time-constrained PCT staff is likely to be a real challenge for the research team. In light of workforce cuts already evident within these PCTs, the researchers will endeavour to fit the study around existing dental business to reduce any additional burden upon participants. Similarly, the research team will try to ensure that panel meetings do not always occur during normal business hours so that NHS dentists are not prevented from conducting their normal clinical duties. Despite government plans to abolish PCTs, the commissioning of NHS dental services at local or regional levels is almost certain to continue within the context of resource scarcity. This study will first explore the current status of dental commissioning within the NHS organisations involved and it will investigate how PBMA may act as a framework to structure decision-making processes.
NHS dental services are arguably in need of a range of new clinical outcome measures with which to measure oral health improvement across local populations. Within a resource scarce environment decisions still need to be made. New dental business cases prioritised for implementation as a result of this study may require several years for their clinical effects to be observed in the local population. This time delay will mean that the impact of service changes or preventive or clinical interventions will require detailed follow up over a number of years after the end of this study.
Discussion
The protocol outlines a study which is of direct and immediate relevance to patients, the public, health professionals and commissioners of NHS dental services. With almost £3 billion of public expenditure currently spent on NHS dental services in England and widespread criticism regarding the variability of dental commissioning, it is timely for research to investigate how we may improve the process in order to use scarce resources more efficiently. The key issue relates to whether we can further maximise the oral health of populations with the resources currently available to local NHS commissioning organisations. In order to improve oral health (and indirectly the general health) of populations, one research direction may be to propose a move away from historic funding allocations, to an oral health service which is built upon local oral health needs and with the combined views and priorities of stakeholders included in the commissioning process. We propose that at the heart of this process should be the recognition of resource scarcity and that managing health needs requires decisions to be made within existing constraints.
Our earlier research conducted as a precursor to this study suggested that there is a real potential for PCTs to be managing dental resources sub-optimally [11]. For example, in NHS dentistry it is known that historically, money has followed activity (dental treatment), not patients' needs [38]. A pragmatic economics-based approach built upon the principles of PBMA may inform and improve our current commissioning arrangements and assist in the realignment of resources to benefit patients and the public.
This research is timely as it seeks to explore the complexities and processes associated with priority setting and resource allocation in order to maximise oral health 'per pound spent'. The study will provide analysis of the complexities involved with decision-making for NHS dental services and determine the value of an economics-based framework with which to structure the commissioning process.
The study aims to address a fundamental question: how best to allocate finite resources for NHS dentistry at local levels? Through the use of action research as an inclusive research approach, the study will seek to involve stakeholders through a series of advisory panel meetings, focus groups and semi-structured interviews in order to weight a number of agreed prioritisation criteria which will be used to score submitted business proposals for developing local NHS dental services. The findings from the study will be important for policy makers to consider when assessing the structure of new commissioning organisations linked to the next NHS dental contract in England and Wales, which is anticipated in 2014. | 2014-10-01T00:00:00.000Z | 2011-05-31T00:00:00.000 | {
"year": 2011,
"sha1": "cf4e79e7e1b16c7d18bf9119ae1922f7c7e504f2",
"oa_license": "CCBY",
"oa_url": "https://bmchealthservres.biomedcentral.com/track/pdf/10.1186/1472-6963-11-138",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "74a1bd0ffb6cfb3001b6e5d521836413a3d97941",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
202556322 | pes2o/s2orc | v3-fos-license | Synthesis of Pluri-Functional Amine Hardeners from Bio-Based Aromatic Aldehydes for Epoxy Amine Thermosets
Most current amine hardeners are petro-sourced and only a few studies have focused on bio-based substitutes. Hence, in an eco-friendly context, our team proposed the design of bio-based amine monomers with aromatic structures. This work describes the use of reductive amination via an imine intermediate in order to obtain bio-based pluri-functional amines exhibiting low viscosity. The effect of the nature of the initial aldehyde reactant on the hardener properties was studied, as well as the reaction conditions. Then, these pluri-functional amines were added to petro-sourced (diglycidyl ether of bisphenol A, DGEBA) or bio-based (diglycidyl ether of vanillin alcohol, DGEVA) epoxy monomers to form thermosets by step-growth polymerization. Due to their low viscosity (<0.6 Pa s at 22 °C), the epoxy-amine mixtures were easily homogenized and cured more rapidly compared to the use of more viscous hardeners. After curing, the thermo-mechanical properties of the epoxy thermosets were determined and compared. The isophthalatetetramine (IPTA) hardener, with a higher number of active amine hydrogens, led to thermosets with higher thermo-mechanical properties (glass transition temperatures (Tg and Tα) around 95 °C for DGEBA-based thermosets against 60 °C for DGEVA-based thermosets) than materials from benzylamine (BDA) or furfurylamine (FDA) that contained fewer active hydrogens (Tg and Tα around 77 °C for DGEBA-based thermosets and around 45 °C for DGEVA-based thermosets). Compared to industrial hardener references, IPTA possesses six active hydrogens, yielding highly cross-linked systems similar to the industrial references, and a longer molecular length due to the presence of two alkyl chains, leading respectively to high mechanical strength and a lower Tg.
Introduction
Amine is one of the most important functional groups in the chemical industry, highly present in various industrial fields such as pharmaceuticals [1][2][3], agrochemicals [4,5], detergents [6,7], lubricants [8] and the polymer industry [9,10]. Amines act as intermediates in the synthesis of different polymers including phenolic resins [11,12], polyimides [13], polyureas, polyurethanes [14] and poly(hydroxy)urethanes [15], polyamides [16,17] and epoxy thermosets [18,19]. Our team has long experience in the synthesis of epoxy thermosets [10,20] and has recently developed new efficient bio-based amine hardeners exhibiting high reactivity, via a direct amination method of epoxy monomers using an aqueous ammonia solution [21]. Due to the presence of many hydrogen-bond sites on their structures, the obtained β-hydroxylamine hardeners showed high reactivity but also high viscosity. Therefore, we worked on an alternative pathway to synthesize bio-based pluri-functional amine monomers, avoiding the formation of hydroxyl functions, in order to disfavor the hydrogen-bonding effect and thus decrease viscosity. Hence, we synthesized amines from imine reduction, more generally called reductive amination. This methodology easily affords secondary or tertiary amine monomers with high yields and high reactivity [22][23][24]. Using aromatic aldehydes, amine monomers containing aromatic moieties can be synthesized from aliphatic amines, which are less toxic than conventional aromatic amines. Moreover, benzylamine monomers are much more reactive than aromatic ones. Aromaticity is really interesting in epoxy-amine formulation to improve the miscibility of amines with epoxy monomers, which are most of the time aromatic substances. Moreover, providing aromaticity tunes the hydrogen equivalent weight (HEW), decreases the volatility of amines and induces high thermo-mechanical properties in the final thermosets. Hence, such aromatic amines could be highly desirable for epoxy thermoset synthesis.
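Because the hydrogen equivalent weight mentioned above governs how a hardener is dosed against an epoxy monomer, a short worked example may help. The sketch below computes the amine hydrogen equivalent weight (AHEW) and the stoichiometric mixing ratio using standard epoxy-amine stoichiometry; the molecular weight in the example is a placeholder rather than a measured value for the hardeners of this study.

```python
# Standard epoxy-amine stoichiometry: AHEW = amine molecular weight divided by
# the number of active N-H hydrogens; the stoichiometric ratio is then
# phr = 100 * AHEW / EEW (parts hardener per hundred parts epoxy resin).

def ahew(molecular_weight: float, active_hydrogens: int) -> float:
    return molecular_weight / active_hydrogens

def hardener_phr(ahew_value: float, eew: float) -> float:
    return 100 * ahew_value / eew

# Hypothetical example: a tetramine with MW 250 g/mol and 6 active hydrogens
# (cf. the six active H of IPTA) cured with a diepoxide of EEW 175 g/eq,
# a typical value for DGEBA-type resins.
a = ahew(250, 6)                                     # ~41.7 g/eq
print(round(a, 1), round(hardener_phr(a, 175), 1))   # -> 41.7 23.8
```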
Moreover, imine reduction is a simple method that can employ a variety of hydride reducing agents [25,26]. The reduction step can also be performed by an aminocatalytic method using a catalyst [27], by a one-step bio-catalyzed transformation using enzymes [28][29][30] or by an adaptation of the Haber-Bosch method used for the synthesis of ammonia [31]. The latter involves the metal-catalyzed hydrogenation of unsaturated bonds such as imine functions [32][33][34][35][36]. For instance, Jia et al. described the development of a ruthenium-catalyzed hydrogenation method for the reduction of imine intermediates to obtain amine monomers [37]. Exposito et al. recently developed the Pd-catalyzed hydrogenation of imines in an industrial continuous-flow process, using a Pd/C support that was stable over multiple reaction-regeneration cycles [38].
For epoxy-amine thermoset formulation, the hardener requires a structure with at least three active hydrogens to form a cross-linked network when using a conventional diepoxy prepolymer such as diglycidyl ether of bisphenol A (DGEBA). Most amine hardeners are petro-based diamines (four active H) such as 4,4′-methylenebis(cyclohexylamine) (PACM) [39,40] and triethylene glycol diamine (EDR 148) [41,42], and there are only a handful of bio-based diamines, such as 1,6-hexamethylenediamine or 1,9-nonanediamine, which can be obtained from renewable resources [43], 1,10-decanediamine from alcohols [44] and the diamine developed by Garrison et al. from renewable terpenoids [45]. Beyond this functionality, however, very few polyfunctional amines are reported, and most of them are petro-based [10], such as Jeffamine T403, which exhibits low reactivity [46]. Only a few poly-functional amine hardeners are bio-based, such as the citric amido amine described by Bähr et al., who performed the aminolysis of citric acid to obtain a tri-functional primary amine [47]. Pluri-functional cardanol-based amines can also be obtained by reaction with formaldehyde and aliphatic di- or triamines, yielding the corresponding Mannich hardeners [48].
Reductive amination could be a solution for designing polyfunctional amine hardeners, since it is an easy method to tune and modify already existing amines. However, imine reduction generally leads to secondary mono- or diamines as intermediates in pharmaceutical applications [49][50][51], except when ammonia is used as the initial amine reactant [52,53], yielding primary amines which can be efficient hardeners [54,55]. This process was used to produce diamine monomers, which are useful as starting materials for various polyamides and polyurethanes, but not for epoxy thermosets. Moreover, only a few amine monomers from this reductive amination method are bio-based, such as 2,5-bis(aminomethyl)furan developed in 2015 by Le et al. [10,56,57]. Another possibility to synthesize hardeners by reductive amination is to slowly add an aldehyde to an excess of primary diamine monomers. Only a few examples are described in the literature. For instance, Micklitsch et al. synthesized an amine monomer exhibiting one secondary and one primary amine function from ethylenediamine and benzaldehyde in the presence of NaBH4 [58]. In 2012, Milelli et al. described the synthesis of a naphthalene diimide derivative, obtained from the addition of a reduced benzylic imine onto a 1,4,5,8-naphthalenetetracarboxylic dianhydride monomer [59]. To the best of our knowledge, reductive amination via imine formation is hardly described in the literature for epoxy-amine applications. Only Kasemi et al. patented hardeners using aldehyde monomers and principally m-xylylenediamine (MXDA) or some derivatives of 1,2-propylenediamine as the amine, without any concern for biomass origin or reactant toxicity [60]. Moreover, this patent is very broad and imprecise and does not study the properties of the hardeners.
Hence, our work proposes for the first time a reductive amination method for the synthesis of pluri-functional amine hardeners containing aromatic moieties and bearing three to six active hydrogens, which form cross-linked systems with diepoxy monomers. Moreover, this method respects green chemistry principles through the use of monomers that are, or could be, obtained from biomass and have low toxicity. We studied the impact of the reaction conditions on the final amine functionality. Then, we evaluated the impact of phenyl and furan moieties on the thermal resistance of the monomers [61,62]. Finally, the hardener properties were studied through the synthesis and characterization of epoxy-amine thermosets.
Characterization Techniques
1H and 13C NMR analyses were recorded on a 400 MHz Bruker Aspect NMR spectrometer (Rheinstetten, Germany) at 23 °C in deuterated solvents. Tetramethylsilane was used as the reference for chemical shifts, which are given in parts per million (ppm).
Viscosity measurements were performed at 22 °C on an AR-1000 rheometer (TA Instruments, New Castle, DE, USA). A cone-plate geometry with a 60 mm diameter and a 2° angle was used. The flow mode was used with a gradient from 1 to 0.01 rad·s−1.
Fourier-transform infrared (FTIR) spectra were recorded in transmittance using a Thermo Scientific Nicolet 6700 FTIR spectrometer with "diamond ATR" equipment (Waltham, MA, USA). For each spectrum, 32 scans were performed in the range 4000-650 cm−1 with a resolution of 4 cm−1. OMNIC software was used.
Thermogravimetric analyses (TGA) were recorded using a Netzsch F1-Libra analyzer (Selb, Germany) at a heating rate of 20 °C·min−1 from 25 to 600 °C under a nitrogen stream. Each sample (9-10 mg) was placed in an alumina crucible. The moisture and volatile content, the percentage of residue at 600 °C and the degradation temperature (Td) were determined from the TGA analyses.
Differential scanning calorimetry (DSC) measurements were performed with a Netzsch DSC200F3 calorimeter (Selb, Germany; indium calibration, nitrogen stream). Pierced aluminum pans were used as crucibles with approximately 10 mg of sample. A heating rate of 20 °C·min−1 from −100 °C to 120 °C was used to record the glass transition temperature (Tg). Reported values were measured on the second heating ramp.
Dynamic mechanical analyses (DMA) were performed on a Metravib DMA 25 with Dynatest 6.8 software (TA Instruments, New Castle, DE, USA). Uniaxial stretching of samples was carried out while heating at a rate of 3 °C·min−1 at a constant frequency of 1 Hz and a fixed strain (chosen within the elastic domain of the sample). For Tg around 60 °C, DMA analyses were performed from −30 °C to +160 °C; for Tg around 90 °C, from 0 °C to +190 °C.
Cross-linking density: based on rubber elasticity theory [63], uniaxial stretching was studied on the rubbery plateau at T = Tα + 80 °C and at very small deformations. Under these hypotheses, the cross-linking density (ν′) was determined from Equation (1), where E′ is the storage modulus on the rubbery plateau, R is the universal gas constant and T is the absolute temperature in K. Calculated values are given for informational purposes only and serve solely for comparison between materials.
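Assuming the standard rubber-elasticity relation, Equation (1) presumably reads

ν′ = E′/(3RT) (1)

with E′ taken on the rubbery plateau at T = Tα + 80 °C (expressed in K).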
Swelling indices (SI) were measured on samples of approximately 25 mg, which were placed in 25 mL of tetrahydrofuran (THF) for 24 h. This measurement was repeated three times to assess repeatability. The swelling index was calculated according to Equation (2), where m1 is the mass of the material after swelling in THF for 24 h and m2 is the initial mass of the material.
Gel contents (GC) were measured after the SI samples were dried in a ventilated oven at 70 °C for 24 h. The gel content was calculated according to Equation (3), where m3 is the mass of the material after drying and m2 is the initial mass of the material.
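Based on the variable definitions above, Equations (2) and (3) presumably take their usual forms

SI = 100 × (m1 − m2)/m2 (2)

GC = 100 × m3/m2 (3)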
Synthesis of Amine Hardeners
First, 100 mL of a H2O/2-MeTHF mixture (70/30) were added to DYTEK® A (17.3 g, 149 mmol, 10 equivalents) in a 250 mL two-neck round-bottom flask. Isophthalaldehyde (2 g, 14.9 mmol, 1 equivalent) was then solubilized in 30 mL of 2-MeTHF and added dropwise using a dropping funnel. The reaction crude was stirred and heated at 110 °C until complete aldehyde conversion, then cooled down to room temperature.
When a monoaldehyde was used, only 5 equivalents of DYTEK® A were used per equivalent of monoaldehyde.
For the one-pot synthesis of isophthalatetetramine (IPTA2): 100 mL of a H2O/2-MeTHF mixture (70/30) were added to DYTEK® A (17.3 g, 149 mmol, 10 equivalents) in a 250 mL round-bottom flask. Isophthalaldehyde (2 g, 14.9 mmol, 1 equivalent) was then added at once. The reaction crude was stirred and heated at 110 °C until complete aldehyde conversion, then cooled down to room temperature.
In each case, 2 equivalents of sodium borohydride (relative to the theoretical amount of imine) were then added slowly to the solvent mixture, and the reaction crude was heated at reflux until complete disappearance of the imine signal in 1H NMR. The solvent was removed under reduced pressure. The sodium borohydride was then neutralized by pouring the reaction crude into water (200 mL). This aqueous solution was extracted with AcOEt (3 × 600 mL). The organic phase was washed with brine (400 mL), dried over MgSO4 and filtered, and the solvent was removed under reduced pressure. Any trace of DYTEK® A was removed by distillation.
All the described structures correspond to the attack of an amine group of the diamine compound onto the dialdehyde.
Amine Hydrogen Equivalent Weight (AHEW or HEW) Calculation
Each experimental HEW was determined by 1H NMR titration using benzophenone as the internal reference. To this end, known weights of amine and benzophenone were placed in an NMR tube and 500 µL of deuterated chloroform were added. HEW values were determined according to Equation (4), where I(PhCOPh) is the integration of the benzophenone protons.
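As a plausible reconstruction of the titration behind Equation (4), the following sketch computes an HEW from the weighed masses and the NMR integrals. The function name and the normalization by benzophenone's ten aromatic protons are our assumptions, not necessarily the paper's exact formula:

```python
# Hypothetical reconstruction of the NMR-titration HEW calculation (the body of
# Equation (4) is not shown in the text). Benzophenone (PhCOPh, 10 aromatic H,
# M = 182.22 g/mol) serves as the internal reference.

M_BENZOPHENONE = 182.22  # g/mol
N_H_BENZOPHENONE = 10    # aromatic protons per benzophenone molecule

def hew_from_nmr(m_amine_g, m_ref_g, integral_ref, integral_nh):
    """Estimate the amine hydrogen equivalent weight (g per mol of N-H).

    m_amine_g    -- weighed mass of the amine sample in the NMR tube (g)
    m_ref_g      -- weighed mass of benzophenone (g)
    integral_ref -- integral of the 10 benzophenone aromatic protons
    integral_nh  -- integral of the amine N-H protons
    """
    mol_ref = m_ref_g / M_BENZOPHENONE
    # Integral corresponding to one mole of protons:
    integral_per_mol_h = integral_ref / (N_H_BENZOPHENONE * mol_ref)
    mol_nh = integral_nh / integral_per_mol_h
    return m_amine_g / mol_nh  # g per mol of active hydrogen

# Example with illustrative numbers (50 mg amine, 30 mg benzophenone):
print(hew_from_nmr(0.050, 0.030, 10.0, 3.1))
```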
Synthesis of Epoxy Thermosets
The amount of hardener for 100 g of epoxy, for a theoretical molar ratio of 1:2 between amine and epoxy functions, was calculated according to Equations (5) and (6), where AHEW (or HEW) is the amine hydrogen equivalent weight and EEW is the epoxy equivalent weight. The optimal molar ratio was then determined by adjusting Equation (6), multiplying the hardener mass by various amine/epoxy ratios. The optimal molar ratio, corresponding to the highest Tg, was determined by recording the Tg values using differential scanning calorimetry (DSC).
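A minimal sketch of this screening, assuming Equations (5) and (6) express the standard relation of hardener parts per hundred resin, 100 · AHEW/EEW, scaled by the trial amine/epoxy ratio (the function name and the illustrative AHEW/EEW values are ours):

```python
# Sketch of the stoichiometry screening described above (Equations (5)-(6) are
# not reproduced in the text; the standard 1:1 N-H-to-epoxy relation is assumed).

def hardener_mass_per_100g_epoxy(ahew, eew, ratio=1.0):
    """Hardener mass (g) per 100 g of epoxy monomer.

    ahew  -- amine hydrogen equivalent weight (g/eq)
    eew   -- epoxy equivalent weight (g/eq)
    ratio -- trial amine/epoxy functional ratio (1.0 = stoichiometric)
    """
    return 100.0 * ahew / eew * ratio

# Screen candidate ratios; the optimum is the one maximizing the Tg found by DSC.
for r in (0.8, 0.9, 1.0, 1.1, 1.2):
    print(r, round(hardener_mass_per_100g_epoxy(ahew=60.0, eew=187.0, ratio=r), 1))
```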
Reactants were then mixed according to the previously determined optimal molar ratio and cured at 80 °C for 2 h to obtain the thermosets.
Results and Discussion
The aim of this study was to synthesize new pluri-functional and bio-based aminated hardeners containing aromatic moieties. To this end, the reductive amination method was applied to bio-based, non-toxic aromatic aldehydes with a non-toxic diamine (Figure 7). This method introduces aromatic moieties, and thus increases the thermo-mechanical properties of the final material, without having to synthesize aromatic amines, which are generally toxic. The pluri-functionality of the amine was easily tuned by changing the active H number through the selection of the aldehyde structure and the reaction conditions, and we studied the influence of the active H functionality on the thermo-mechanical properties. The use of a dialdehyde monomer increases the active H functionality of the final hardener compared to the initial diamine (six active H versus four, respectively), while a monoaldehyde leads to three active H. In this view, isophthalaldehyde (IPA) was chosen as the dialdehyde monomer and benzaldehyde as the monoaldehyde reference. All previously cited aldehydes can be produced from biomass and are non-toxic [64][65][66][67][68]. Furfuraldehyde was also chosen, despite its toxicity, in order to compare the final thermoset properties with aromatic ones [69]. Due to its liquid state and its presence on the REACH (registration, evaluation, authorisation and restriction of chemicals) registration list, 2-methylpentane-1,5-diamine (DYTEK® A) was chosen as the amine reactant [70]. The branched methyl group in its structure decreases the viscosity through steric hindrance, giving the reactant its liquid state. Moreover, DYTEK® A could be synthesized by the methylation of natural glutamine [71,72].
Hardeners Synthesis
Each aldehyde reactant was added dropwise to an excess of the initial amine reactant in order to avoid oligomer formation. To simplify the procedure, we chose NaBH4 as the reducing agent for this study; however, this reduction step can easily be performed with an industrial process such as catalytic hydrogenation [31]. The IPA-based amine was additionally synthesized under one-pot conditions, by adding all the aldehyde at once to the amine, to favor dimerization (i.e., to obtain a higher functionality). The aim of this one-pot method was to study the influence of increasing the active hydrogen functionality to six on the network properties. All amine characterizations are summarized in Table 1. The reductive amination method achieves full conversion of the aldehyde during the imine synthesis step and then full conversion of the imine during the second step (spectra given in Supplementary Materials, parts 1 to 3). All amine hardeners from IPA, benzaldehyde and furfural were successfully synthesized.
The imine synthesis and subsequent reduction may easily be monitored by the appearance and disappearance of the imine signal in both the FTIR and 1H NMR spectra (Figure 8). For instance, in the case of the IPA-based hardener synthesis (IPTA), the reduction of imine moieties to amine functions may be observed through the disappearance of the C=N stretching band at 1643 cm−1. The 1H NMR spectrum changes considerably after the reduction step. The disappearance of the -CH signal corresponding to the imine proton at 8.25 ppm is observed and then confirmed by the appearance of a singlet at 3.77 ppm, corresponding to the -CH2 signal of the reduced imine function. Moreover, the disappearance of the -CH2 signals corresponding to the protons in the α position of the C=N bond, designated as 5 and 5′, is shown in Figure 8. The reduction step removes the conjugated system involving the imine bond; consequently, the aromatic signals of the formed amines are shifted from the 8.00-7.36 ppm region to the 7.27-7.18 ppm region.

1H NMR spectra of the three synthesized amines from IPA (named IPTA1), benzaldehyde (named BDA) and furfural (named FDA) are displayed in Figure 9. The IPA-based hardener synthesized under one-pot conditions (IPTA2) shows a spectrum similar to IPTA1, with different integration values (spectra given in Supplementary Materials, part 1c). The signal corresponding to the CqAr-CH2 protons in the α position of the secondary amine appears as a singlet at 3.75 ppm (designated as 1 in Figure 9). The signals of the other α-CH2 protons of the secondary and primary amine moieties overlap at 2.51 ppm (designated as 2 in Figure 9). Moreover, the signals of the amine from the A addition differ from those of the amine from the B addition but fall within the same overlapped signal; thus, at least four signals overlap at 2.51 ppm. Due to this overlap, the A:B addition ratio could not be determined from the 1H NMR spectra of the final amine monomers. However, before the reduction step, the α-CH2 signal of the imine is shifted to 3.50 ppm by the conjugated system involving the imine bond. This signal is split into two distinct signals for the A and B additions, allowing the proportion of each to be determined (spectra given in Supplementary Materials, parts 1 to 3).

The characterizations of the new bio-based amine hardeners, named IPTA1, IPTA2, BDA and FDA, are summarized in Table 1 (DSC and TGA thermograms are given in Supplementary Materials, parts 4 and 5). The A and B additions were obtained in similar proportions, with an average ratio of 50:50 (determined using the 1H NMR spectra of the imine intermediates). The experimental HEW was determined by NMR titration, as reported in the Materials and Methods section; experimental and theoretical values were almost identical. The higher glass transition temperature (Tg) of IPTA2 compared to IPTA1 confirms the presence of more dimers in the IPTA2 hardener than in IPTA1. The viscosity follows the same trend, with a higher value for IPTA2 than for IPTA1. Due to the higher content of aromatic moieties in the dimeric structure, and thus in the IPTA2 hardener, the Tg and Td5% of IPTA2 are higher than those of the monomeric IPTA1. BDA and FDA both exhibit similar Tg and Td5% values, showing similar thermo-mechanical behavior. IPTA1, BDA and FDA are liquids with low viscosities, below 0.6 Pa·s at 22 °C and close to that of water, which is an interesting property for epoxy-amine formulations.
The comparison of the IPTA1 and IPTA2 hardeners shows that viscosity increases with the proportion of dimers.
Thermoset Syntheses
The synthesized amines were then used to prepare epoxy thermosets (also named P-materials) with different epoxy monomers. Bulk materials (parallelepiped shape) were obtained by curing the synthesized amines with epoxy monomers, using previously determined optimal ratios (method described in the Experimental section; DSC thermograms and optimal ratios given in Supplementary Materials, parts 6 and 7). First, the hardeners were reacted with diglycidyl ether of bisphenol A (DGEBA) as a petro-sourced epoxy reference. These thermosets can be compared to literature results describing the network characteristics of MXDA- and DYTEK® A-based materials with DGEBA as the epoxy part [73][74][75]. m-Xylylenediamine (MXDA) is a petro-sourced benzylic amine hardener currently used in the industrial field of epoxy coatings due to its high reactivity [76]. It is interesting to compare the MXDA and IPTA structures due to their similar dibenzyl center, and the influence of the aliphatic chains provided by DYTEK® A can be observed using the DYTEK® A-DGEBA thermoset results as references. In the same way, thermosets (also named bio-materials) were then synthesized using diglycidyl ether of vanillin alcohol (DGEVA) [77], a bio-based epoxy derived from vanillin, in order to increase the bio-based carbon content. The thermo-mechanical properties and chemical resistance in THF of each optimal bulk network were determined and are summarized in Table 2. Epoxy and amine reactants were mixed and then cured at 80 °C for 2 h, with reactant amounts corresponding to the respective optimal ratios. The end of the cross-linking reaction was confirmed by DSC analyses, with no residual enthalpy signal on any thermogram. Furthermore, high gel content values (>90%), corresponding to highly cross-linked materials, confirmed full conversion for each thermoset.
The thermal stabilities were determined using TGA under a nitrogen stream. The 5% weight loss temperature (Td5%) and the char yields at 600 °C were recorded (Figure 10). IPTA1-based and IPTA2-based materials followed the same trend, exhibiting similar Td5% and char yield values. Furthermore, the P-materials showed slightly higher thermal resistance, with Td5% values around 350 °C against 315 °C for the bio-materials, which nevertheless retain good thermal resistance. In contrast, a higher char yield at 600 °C was observed for the bio-based materials, meaning higher thermal resistance at high temperatures. This can be explained by the absence of the geminal dimethyl bridge, which has low thermal stability, in the bio-based epoxy structure. The TGA results show that the slight molecular weight difference between IPTA1 and IPTA2 has no impact on thermal stability. IPTA-based materials showed slightly higher char yields than the BDA-based material, with a residual mass 3%-4% higher. FDA-based materials exhibited the highest char yield, meaning that furan moieties provide higher thermal resistance than benzyl moieties. All P-materials showed higher thermal stability than the MXDA-DGEBA material (MXDA-ref).

The glass transition temperatures (Tg) were recorded by DSC and compared to the alpha transition temperatures (Tα) determined by DMA, which correspond to the mechanical manifestation of the Tg (DMA in Figure 11; DSC thermograms given in Supplementary Materials, part 8). The transition from the vitreous state to the rubbery state induces a modulus loss, and thus the maximum of the tan δ curve as a function of temperature corresponds to Tα. The Tg and Tα values followed the same trend for each thermoset, each with narrow tan δ peaks, suggesting that the materials are homogeneous. Overall, the fully bio-based thermosets exhibited Tg values 30 to 40 °C lower than the P-material references. This decrease is due to the presence of methylene and methoxy moieties in the DGEVA structure, which behave as spacers and push the polymer chains apart, thereby providing flexibility and thus a lower Tg. DSC results showed a difference of 10 °C between IPTA1- and IPTA2-based thermosets in favor of the IPTA1-based ones, owing to the lower molecular weight and hence shorter backbones of IPTA1. The IPTA-based networks were then compared to the BD-P reference. BD-P is based on BDA, a hardener with only three active hydrogen functions (one secondary and one primary amine), while the IPTA-based hardeners show a similar backbone structure containing at least two alkyl chains and six active hydrogens (two secondary and two primary amines). Due to the larger number of alkyl chains bearing -NH functions in the IPTA structure, the aromatic moieties are directly incorporated into the polymer chain, whereas the BDA structure leads to alkyl polymer chains with dangling aromatic moieties (Figure 12). Moreover, the six reactive -NH functions of IPTA increase the cross-linking density (ν′) compared to BDA (respectively 1311 and 460 mol·m−3 for P-materials, and 997 and 659 mol·m−3 for bio-materials). BDA-based and FDA-based thermosets exhibited similar Tg and Tα, without any distinction between furan and benzyl moieties.
However, the cross-linking density values showed that the benzyl-based thermosets have more compact networks than the furan-based thermosets (with respectively 460 and 123 mol·m−3 for P-materials, and 407 and 91 mol·m−3 for bio-materials). These results are confirmed by the swelling index values, which are inversely proportional to the cross-linking density: a compact network allows less solvent ingress and thus yields a lower swelling index. Overall, the comparison of Tg and Tα for these four networks shows that the higher the number of active hydrogens, the higher the Tg and Tα. However, comparing IPT1-P, IPT2-P and BD-P with DYTEK® A-ref and MXDA-ref, a decrease of Tg and Tα can be noticed. In the case of BD-P, the addition of the benzyl moiety removes one active hydrogen from the structure, reducing the cross-linking density and thus increasing the flexibility of the network (ν′BDA = 460 mol·m−3 against 1146 for MXDA-ref). For the IPTA-based thermosets, the active hydrogen functionality of six increases the cross-linking density (ν′IPTA1 = ν′IPTA2 = 1311 mol·m−3). However, the presence of aliphatic chains results in a greater molecular length, allowing larger microscopic deformation and yielding a lower Tg (in increasing order of molecular length: Tg of MXDA-ref = 116 °C ≈ Tg of DYTEK® A-ref, Tg of IPT1-P = 99 °C, and Tg of IPT2-P = 89 °C).
Finally, the storage moduli of the vitreous (E′glassy) and rubbery (E′rubbery) domains were determined at T(α−80) and T(α+80), respectively, using DMA, providing information on macroscopic deformation. The storage modulus E′rubbery is also linked to the cross-linking density according to rubber elasticity theory [78]. The results showed similar storage moduli in the elastic domain for the IPTA-based thermosets, with E′rubbery on the order of 10⁷ Pa, which corresponds to a high mechanical strength compared to classical high-performance thermosets such as the MXDA-ref and DYTEK® A-ref materials [73][74][75]. It is noteworthy that a similar mechanical strength was obtained with a lower Tg in the case of the IPTA-based materials. These results stem from the particular structure of IPTA compared to the MXDA and DYTEK® A references: IPTA exhibits six active -NH functions, increasing the cross-linking density and thus the mechanical strength at the macroscopic scale, while the greater molecular length induced by the two alkyl chains in the IPTA backbone decreases the Tg at the microscopic scale. In the case of BD-P and FD-P, which exhibit three active -NH functions, the results showed a lower order of magnitude of 10⁶ Pa, with a lower value for FD-P (E′rubbery of BD-P = 4.84 × 10⁶ Pa and E′rubbery of FD-P = 1.30 × 10⁶ Pa), meaning a lower mechanical strength than IPT1-P and IPT2-P. The slightly lower E′rubbery value of FD-P could be attributed to the higher aromaticity of the benzene ring compared to the furan ring [79], which induces stronger π-stacking in BDA-based materials [80,81]. Comparing BD-P and FD-P to DYTEK® A-ref, the loss of one active -NH function leads to a lower mechanical strength due to the decrease in cross-linking density. In contrast, the IPTA-based materials exhibited similar storage moduli thanks to their higher cross-linking density, despite their lower Tg values.
Conclusions
New pluri-functional amine hardeners based on various bio-based aldehydes were synthesized using the reductive amination method. Three amine monomers were obtained: IPTA, BDA and FDA from isophthalaldehyde, benzaldehyde and furfuraldehyde, respectively. IPTA exhibits six active amine hydrogens versus three for BDA and FDA. The molecular weight of IPTA, and thus its properties, could be modified by changing the reaction conditions (dropwise addition or one-pot conditions). All hardeners exhibited little or no color and no odor. As expected, these hardeners are liquids with low viscosity (IPTA1, BDA and FDA < 0.6 Pa·s at 22 °C), much lower than that of our previously synthesized β-hydroxylamine hardeners (>300 Pa·s at 50 °C) [21]. Due to their low viscosities, the epoxy-amine mixtures were easily homogenized and rapidly cured at the optimal epoxy-amine ratio, using DGEBA as the petro-sourced epoxy reference and DGEVA as the bio-based epoxy monomer. The synthesized thermosets showed good thermo-mechanical properties, with the best results for the IPTA-based materials, which showed mechanical strength and cross-linking density similar to industrial references. In fact, the functionality of six active hydrogens of IPTA led to unique, highly cross-linked systems exhibiting a lower Tg, due to the presence of two alkyl chains in the IPTA backbone allowing microscopic-scale deformation, while keeping a high mechanical strength at the macroscopic scale, similar to industrial references. | 2019-09-12T13:06:32.048Z | 2019-09-01T00:00:00.000 | {
"year": 2019,
"sha1": "44df4cdfc7d0116b0f961779931b0008f3543db0",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1420-3049/24/18/3285/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "53b9c0b7b8529856e6f5f3b595004a4efcc5f1f7",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
231839449 | pes2o/s2orc | v3-fos-license | Function-Correcting Codes
In this paper we study function-correcting codes (FCCs), a new class of codes designed to protect the function evaluation of a message against errors. We show that FCCs are equivalent to irregular-distance codes, i.e., codes that obey some given distance requirement between each pair of codewords. Using this connection, we study irregular-distance codes and derive general upper and lower bounds on their optimal redundancy. Since these bounds heavily depend on the specific function, we provide simplified, suboptimal bounds that are easier to evaluate. We further apply our general results to specific functions of interest and compare our results to standard error-correcting codes, which protect the whole message.
Table I: Summary of results on the optimal redundancy of FCCs. The entries marked with a superscript * are approximations for large dataset dimension k and expressiveness E (where applicable) and a fixed number of errors t, where lower-order terms are neglected. The redundancy of FCCs is displayed for the case where Hadamard matrices of the correct size exist, cf. Lemma 3. These restrictions and regimes are chosen to allow for better comparison; however, our results are not restricted to these regimes. Precise definitions of the displayed functions can be found in Section IV (binary and locally binary), Section V-A (Hamming weight), Section V-B (Hamming weight distribution) and Section VI (min-max).

By this definition, given any y obtained by at most t errors from Enc(u), the receiver can uniquely recover f(u) if it has knowledge of the function f(·) and the encoding function Enc(·). Notably, only codewords that originate from information vectors (messages) that evaluate to different function values need to have distance at least 2t + 1. Throughout the paper, a standard error-correcting code is an FCC for f(u) = u, i.e., a code that allows the reconstruction of the whole message u. We summarize some basic properties of FCCs in the following.
• For any bijective function f, any FCC is a standard error-correcting code.
• For any constant function f, the encoder Enc(u) = u is an FCC with redundancy 0.
• If the encoder has no knowledge about the function f, function-correction is only possible using standard error-correcting codes.
Note that the encoding and decoding complexity of FCCs may be higher or lower than that of standard error-correcting codes and heavily depends on the function f.
The main quantity of interest in this paper is the optimal redundancy of an FCC that is designed for a function f .
The optimal redundancy r_f(k, t) is defined as the smallest r such that there exists an FCC with an encoding function Enc: Z_2^k → Z_2^{k+r} for the function f. We denote by [D]_ij the (i, j)th entry of a matrix D. For any two real numbers a, b ∈ R, we define the closed and half-closed intervals by [a, b] ≜ {x ∈ R : a ≤ x ≤ b} and [a, b) ≜ {x ∈ R : a ≤ x < b}. We denote by N_0 the set of non-negative integers. Note that while our quantitative results in this paper are for substitution channels, the concepts can be generalized to other channels.
III. GENERIC FUNCTIONS
This section is devoted to establishing general results on FCCs. We start by showing the equivalence of FCCs, irregular-distance codes (Definition 4) and independent sets in an associated graph (Definition 5). We then establish several lower and upper bounds on the optimal redundancy of FCCs using these connections.
We begin by introducing irregular-distance codes. To this end, define the distance matrix of a function f as follows.
Definition 3. Let u_1, . . . , u_M ∈ Z_2^k. We define the distance requirement matrix D_f(t, u_1, . . . , u_M) of a function f as the M × M matrix with entries [D_f(t, u_1, . . . , u_M)]_ij = max{2t + 1 − d(u_i, u_j), 0} if f(u_i) ≠ f(u_j), and 0 otherwise.

Let P = {p_1, p_2, . . . , p_M} ⊆ Z_2^r be a code of length r and cardinality M. Here, we choose r as the code blocklength, as we will relate the code length r to the redundancy of FCCs later. Irregular-distance codes are formally defined as follows.

Definition 4. P is called a D-code for an M × M matrix D if d(p_i, p_j) ≥ [D]_ij for all i, j ∈ [M]; we denote by N(D) the smallest length r for which a D-code exists. With this definition, a D-code requires individual distances between each pair of codewords.

Next, we define a function-dependent graph whose independent sets, if large enough, form an FCC. The vertices constitute possible codewords of the FCC, and we connect two vertices if they cannot be contained together in an FCC.
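To make Definition 3 concrete, here is a small illustrative script (ours, not the paper's) that builds the distance requirement matrix for a toy choice of f, namely the Hamming weight:

```python
# Illustrative sketch: build the distance requirement matrix D_f(t, u_1,...,u_M)
# of Definition 3 for a toy function on all messages of length k.
from itertools import product

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def distance_requirement_matrix(messages, f, t):
    """[D]_ij = max(2t + 1 - d(u_i, u_j), 0) if f(u_i) != f(u_j), else 0."""
    M = len(messages)
    D = [[0] * M for _ in range(M)]
    for i in range(M):
        for j in range(M):
            if f(messages[i]) != f(messages[j]):
                D[i][j] = max(2 * t + 1 - hamming(messages[i], messages[j]), 0)
    return D

k, t = 3, 1
msgs = list(product((0, 1), repeat=k))
D = distance_requirement_matrix(msgs, f=sum, t=t)  # f(u) = wt(u)
for row in D:
    print(row)
```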
Definition 5.
We define G_f(k, t, r) to be the graph with vertex set V = {0, 1}^k × {0, 1}^r, such that each vertex has the form x = (u, p) ∈ {0, 1}^{k+r}. Two vertices x_1 = (u_1, p_1) and x_2 = (u_2, p_2) are connected by an edge if either u_1 = u_2 and p_1 ≠ p_2, or if f(u_1) ≠ f(u_2) and d(x_1, x_2) ≤ 2t. We denote by γ_f(k, t) the smallest integer r such that there exists an independent set of size 2^k in G_f(k, t, r).
This graph resembles the characteristic graph in [9]; however, it differs because u is observed through the channel and because, in our problem formulation, the functions depend on the whole message vector. Note that the edges between vertices with u_1 = u_2 enforce the property that each information vector u is assigned exactly one redundancy vector p(u). Fig. 2 visualizes the graph G_f(k, t, r) and a corresponding FCC for a concrete example.
We find the following central connection between the redundancy of optimal FCCs, irregular-distance codes and independent sets in the associated graphs.
Theorem 1. For any function f,

r_f(k, t) = γ_f(k, t) = N(D_f(t, u_1, . . . , u_{2^k})),

where {u_1, . . . , u_{2^k}} = Z_2^k are all binary vectors of length k.

Proof. The first equality is immediate, as an independent set in G_f(k, t, r) exactly captures the required properties of an FCC. Further, the independent set has to have size 2^k such that there is one codeword for every message vector.
Next, we see that r_f(k, t) ≥ N(D_f(t, u_1, . . . , u_{2^k})) is necessary: assuming to the contrary that r_f(k, t) < N(D_f(t, u_1, . . . , u_{2^k})) implies that there must exist two redundancy vectors p_i and p_j, i ≠ j, with d(p_i, p_j) < 2t + 1 − d(u_i, u_j), and hence d(Enc(u_i), Enc(u_j)) = d(u_i, u_j) + d(p_i, p_j) < 2t + 1, which contradicts Definition 1.
On the other hand, r_f(k, t) ≤ N(D_f(t, u_1, . . . , u_{2^k})), as using a correctly assigned D_f(t, u_1, . . . , u_{2^k})-code for the redundancy vectors gives an FCC.

Remark 1. The irregularity of the distance profile of FCCs comes from imposing distance constraints on the redundancy vectors as opposed to the codewords. In our analysis, we found this approach to naturally capture the interplay between the message and the redundancy part and to help with the derivation of the simplified bounds and constructions presented in the sequel.
With the result of Theorem 1, one can deduce insights into FCCs using known results about the sizes of independent sets in general graphs, such as [19], [20].
However, the problem of finding optimal FCCs requires determining whether the size of the largest independent set meets the threshold 2^k. The related problem of finding a maximum independent set in arbitrary graphs is known to be NP-complete [21], which indicates that the problem of finding optimal FCCs is also complex, unless the structure of the analyzed function f imposes an easily tractable graph structure that simplifies the search for large independent sets.
This implies that the construction of optimal FCCs may become computationally infeasible for large parameters and unstructured functions. To cope with such scenarios, we derive simplified, possibly sub-optimal, results on irregular-distance codes, in order to facilitate research on arbitrary functions. We proceed with deriving results that act on a smaller set of information vectors and ease the derivation of analytical results.
A. Simplified Redundancy Lower Bounds
We first compute simplified lower bounds on the optimal redundancy of FCCs. Using an arbitrary subset of information vectors u_1, . . . , u_M with M ≤ 2^k, we can obtain a lower bound on the redundancy as follows.
Corollary 1. Let u_1, . . . , u_M ∈ Z_2^k be arbitrary distinct vectors. Then, the redundancy of an FCC is at least r_f(k, t) ≥ N(D_f(t, u_1, . . . , u_M)).

Proof. The statement is immediate, since any subset of information vectors must also fulfill the FCC conditions.
Finding N(D_f(t, u_1, . . . , u_{2^k})) is in general quite difficult, and it can be easier to focus only on a small but representative subset of information vectors. However, the particular subset heavily depends on the function itself, and it is not possible to give a generic approach for finding a good subset. Loosely speaking, good bounds are obtained for information vectors that have distinct function values and are close in Hamming distance. Throughout this paper, we provide some insights on good choices of information vectors using illustrative examples.
B. Simplified Existential Bounds
We proceed with simplifying Theorem 1 in order to obtain more easily computable existential bounds. We start by defining the distance between two function values.

Definition 6. The distance between two function values f_1, f_2 ∈ Im(f) is defined as the smallest distance between two information vectors that evaluate to f_1 and f_2, i.e., d_f(f_1, f_2) ≜ min{d(u_1, u_2) : f(u_1) = f_1, f(u_2) = f_2}. Note that d_f(f_1, f_1) = 0 for all f_1 ∈ Im(f). The function distance matrix of f is thus defined as the E × E matrix D_f(t, f_1, . . . , f_E) with entries [D_f(t, f_1, . . . , f_E)]_ij = 2t + 1 − d_f(f_i, f_j) for i ≠ j and 0 on the diagonal.
One way to construct FCCs is to assign the same redundancy vector to all information vectors u that evaluate to the same function value. This is not a necessity; however, it gives rise to the following existence theorem.
Theorem 2. For any function f,

r_f(k, t) ≤ N(D_f(t, f_1, . . . , f_E)).

Proof. We describe how to construct an FCC. The redundancy vectors are chosen to depend only on the function value of u, i.e., the encoding mapping is defined by u → (u, p(f(u))). Denote by p_i the redundancy vector assigned to all u with f(u) = f_i; two information vectors with the same function value therefore have the same redundancy vector. We then choose p_1, . . . , p_E as a D_f(t, f_1, . . . , f_E)-code. By Definition 4, we can guarantee the existence of such parity vectors p_1, . . . , p_E if they have length N(D_f(t, f_1, . . . , f_E)).
There are cases in which the bound in Theorem 2 is tight. We characterize one important case in the following corollary, which is a consequence of Corollary 1 and Theorem 2.
Even though the bound in Theorem 2 is not necessarily tight, in many cases it is much easier to derive the function distance matrix D_f(t, f_1, . . . , f_E) and the corresponding value N(D_f(t, f_1, . . . , f_E)) than the distance requirement matrix D_f(t, u_1, . . . , u_{2^k}), especially when E is small.
C. Irregular-Distance Codes
We summarize here some results about N(D) that allow us to obtain results on the redundancy of FCCs via Theorems 1 and 2. We start with a generalization of the Plotkin bound [22] to codes with irregular distance requirements.
Proof. We start by proving the statement for M even. Let p_1, . . . , p_M be the codewords of a D-code of length r, and stack these codewords as the rows of a matrix P. Since each column of P can contribute at most M²/4 to the sum Σ_{i<j} d(p_i, p_j) (attained when the weight of the column is exactly M/2), we have that Σ_{i<j} d(p_i, p_j) ≤ rM²/4.

Proof (of Lemma 2). We describe how to construct a code of length r meeting the distance requirements by iteratively selecting valid codewords. Assume first for simplicity that π(i) = i. Start by choosing an arbitrary codeword p_1 ∈ Z_2^r. Then, choose a valid codeword p_2 as follows. Since the distance between p_1 and p_2 needs to be at least [D]_12, we choose an arbitrary p_2 such that d(p_1, p_2) ≥ [D]_12; such a codeword exists if the length satisfies 2^r > V(r, [D]_12 − 1). Next, we choose the third codeword p_3. Similarly as before, we need d(p_1, p_3) ≥ [D]_13 and also d(p_2, p_3) ≥ [D]_23; if 2^r > V(r, [D]_13 − 1) + V(r, [D]_23 − 1), we can guarantee the existence of such a codeword p_3. The theorem then follows by iteratively selecting the remaining codewords p_j such that d(p_i, p_j) ≥ [D]_ij for all i < j. Under the condition of the theorem, we can guarantee the existence of all codewords. Since the codewords can be chosen in an arbitrary order, the lemma holds for any order π in which the codewords are selected.
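The iterative argument above translates directly into a greedy procedure. The following sketch (ours, with exhaustive search only suitable for tiny parameters) tries to build a D-code of a given length:

```python
# Illustrative sketch: greedy construction of a D-code in the spirit of the
# iterative argument above. Codewords are chosen one by one; each new codeword
# must satisfy d(p_i, p_j) >= [D]_ij against all previously chosen ones.
from itertools import product

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def greedy_d_code(D, r):
    """Try to build a D-code of length r by exhaustive greedy search.

    D -- M x M matrix of pairwise distance requirements
    r -- candidate code length; returns the codewords, or None on failure
    """
    code = []
    for i in range(len(D)):
        for cand in product((0, 1), repeat=r):
            if all(hamming(cand, p) >= D[i][j] for j, p in enumerate(code)):
                code.append(cand)
                break
        else:
            return None  # no valid codeword of length r found
    return code

D = [[0, 3, 2], [3, 0, 3], [2, 3, 0]]
print(greedy_d_code(D, r=4))
```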
Note that for codes with [D]_ij = D, this bound recovers the well-known Gilbert-Varshamov bound [23], [24]. Several of our results in the following require codes of small cardinality, i.e., codes whose size is of the same order of magnitude as their minimum distance. The following result is based on Hadamard codes [25], [26].
The range of the parameter D is restricted owing to the limited knowledge of the lengths for which Hadamard codes exist. Note that there exist other good codes of small size, such as weak flip codes [27]; however, they only attain the Plotkin bound for a limited range of parameters. In general, it is possible to puncture or juxtapose Hadamard codes (cf. Levenshtein's theorem [25, Section 2.3]) to obtain codes for a larger range of parameters. For our discussion, however, the application of the Gilbert-Varshamov bound is sufficient and further allows us to prove the existence of codes whose size is quadratic in their minimum distance, as follows. The proof of Lemma 4 is obtained using Lemma 2 together with [28, Lemma 4.7.2] and is presented in Appendix B. This result means that, given that the size of the code is moderate, i.e., M ≤ D², for large D the optimal length of an error-correcting code approaches 2D. While Lemma 4 gives a slightly weaker bound than Lemma 3, it holds for any D and for larger code sizes M. Note that a similar bound to Lemma 4 can easily be derived for larger M, i.e., M ≤ D^m with m > 2; however, m = 2 is sufficient for the subsequent analysis.
In the following sections, we turn to discuss specific functions and give bounds on their optimal redundancy, which are tight in several cases. For several instances we additionally give explicit code constructions that can be encoded efficiently. The functions under discussion are locally binary functions, the Hamming weight function, the Hamming weight distribution function, the min-max function and a collection of discretized real-valued functions.
IV. LOCALLY BINARY FUNCTIONS
In the following we define a broad class of functions, called locally binary functions, derive their optimal redundancy and show how it can be achieved using a simple explicit code construction. Locally binary functions are defined as follows.
Intuitively, a ρ-locally binary function is a function where the function regions of all function values are well spread, in the sense that each information word is close to the region of at most one other function value, see Fig. 3. Note that by this definition, any binary function, i.e., one with |Im(f)| = 2, is also ρ-locally binary for arbitrary ρ. We can directly prove the following optimality result.
Proof. By Corollary 1, r_f(k, t) ≥ 2t. On the other hand, achievability follows from the explicit code construction below. Let Im(f) = {f_1, . . . , f_E} and set w.l.o.g. f_i ≜ i. Let u be the information word to be encoded, and let the redundancy be the 2t-fold repetition of the bit ω_2t(u). This gives an FCC for the function f for the following reason. Assume (u, p) = Enc(u) has been transmitted and (u′, p′) has been received. The decoder first computes ω_2t(u′) and performs a majority decision over the 2t + 1 bits (ω_2t(u′), p′), correctly obtaining ω_2t(u), as at most t out of these 2t + 1 bits are erroneous. Finally, the receiver decides for the function value indicated by this bit among the candidates in the vicinity of u′.

It is noteworthy that the code construction used in Lemma 5 leverages the side information provided by the message u, using ω_2t(u′) for decoding, which allows a redundancy of only 2t to be achieved. This side information is particularly useful for locally binary functions due to the structured topology of the function regions, which is visualized in Fig. 3. Ignoring this side information would require significantly more redundancy, cf. Table I.
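The following sketch (ours) illustrates the repetition-plus-majority idea, specialized to a binary function (|Im(f)| = 2), where the side-information bit can simply be taken as f(u) itself; the toy choice of f and all names are assumptions:

```python
# Illustrative sketch: repetition-plus-majority FCC for a binary function.

def f(u):  # toy binary function: parity of the message
    return sum(u) % 2

def encode(u, t):
    return list(u) + [f(u)] * (2 * t)   # message plus 2t-fold repetition of f(u)

def decode(y, k, t):
    u_prime, p_prime = y[:k], y[k:]
    votes = [f(u_prime)] + p_prime      # 2t + 1 bits: f(u') and the redundancy
    return int(sum(votes) > t)          # majority decision recovers f(u)

k, t = 8, 2
u = [1, 0, 1, 1, 0, 0, 1, 0]
y = encode(u, t)
y[3] ^= 1; y[9] ^= 1                    # introduce t = 2 substitution errors
assert decode(y, k, t) == f(u)
print(decode(y, k, t))
```

Note that however many of the t errors hit the message part, they corrupt at most one of the 2t + 1 votes (the recomputed bit f(u′)), so the majority decision remains correct.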
In Section V-B we will present an explicit example of a locally binary function. For illustration, another example of a locally binary function is presented in the following.
V. FUNCTIONS BASED ON THE HAMMING WEIGHT
In this section we study two functions: the Hamming weight function f(u) = wt(u) and the Hamming weight distribution function f(u) = ∆_T(u) = ⌊wt(u)/T⌋, for a given threshold T.
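For concreteness, both functions are one-liners (an assumption-free illustration in code form):

```python
# The two functions studied in this section, for binary lists u.
def wt(u):
    return sum(u)           # Hamming weight

def delta_T(u, T):
    return wt(u) // T       # Hamming weight distribution: floor(wt(u) / T)

print(wt([1, 0, 1, 1]), delta_T([1, 0, 1, 1], T=2))  # -> 3 1
```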
A. Hamming Weight Function
Let f(u) = wt(u), where u ∈ Z_2^k. Note that the expressiveness of wt(·) is E = |Im(wt)| = k + 1. We start by showing that for this function it is possible to achieve optimal redundancy by an encoding function which only depends on the function value, i.e., the Hamming weight of u. Throughout this section we refer to the function distance matrix D_wt(t, f_1, . . . , f_E) as D_wt(t) for ease of notation.
Lemma 6. For the Hamming weight function, r_wt(k, t) = N(D_wt(t)).

Proof. The function values of the Hamming weight function belong to Im(wt) = {0, 1, . . . , k}, and we let i, j ∈ {0, 1, . . . , k} denote two function values. First, we see that the function distance is given by d_wt(i, j) = |i − j|. On the other hand, using the vectors u_i consisting of i ones followed by k − i zeros, for i ∈ {0, . . . , k}, we see that wt(u_i) = i and that their pairwise distances are d(u_i, u_j) = |i − j|. We can then apply Corollary 1 to obtain r_wt(k, t) ≥ N(D_wt(t)).
The following example visualizes the general structure of the function distance matrix D wt (t).
Example 2. The function distance matrix D_wt(2) for k = 6 is the symmetric 7 × 7 matrix with entries [D_wt(2)]_ij = max{5 − |i − j|, 0} for i ≠ j and 0 on the diagonal.

Based on Lemma 6, we can infer a lower bound on the redundancy using the Plotkin-like bound of Lemma 1.
Corollary 3. For any k > t, the following lower bound holds.

Proof. Let {p_1, . . . , p_{k+1}} be a D_wt(t)-code. We prove the corollary by applying the Plotkin-type bound to a subcode of p_1, . . . , p_{k+1}. Consider the first t + 2 codewords p_1, . . . , p_{t+2}. By Lemma 6, their pairwise distance requirements satisfy d(p_i, p_j) ≥ 2t + 1 − |i − j|. With this strengthened bound, the sum of the pairwise distances in Lemma 1 can be increased by one, and we obtain the claimed bound. Hereby, inequality (a) follows from Lemma 1, with an additional summand of 1 due to the fact that d(p_1, p_2) + d(p_1, p_3) + d(p_2, p_3) must be even, and Eq. (b) follows from summing over the diagonals of D_wt(t).
For the following results, we require the shifted modulo function, which is defined as follows.
where the p_i's are defined depending on t as follows: for t ≥ 3, let p_1, . . . , p_{2t+1} be a code with minimum distance 2t, i.e., d(p_i, p_j) ≥ 2t for all i, j ≤ 2t + 1 with i ≠ j, and set p_i = p_{i smod (2t+1)} for i ≥ 2t + 2.
We can use Corollary 3 to narrow down the optimal redundancy of FCCs for the Hamming weight function as follows.
Lemma 7. For any k > 2, r_wt(k, 1) = 3 and r_wt(k, 2) = 6; further bounds hold for t ≥ 5 and k > t. Recall here that using a standard error-correcting code with minimum distance 2t + 1, e.g., a BCH code, results in a redundancy of roughly t log k. Therefore, using FCCs, we improve the scaling of the redundancy by a factor of log k. While we find the optimal redundancy exactly for t = 1 and t = 2, for t ≥ 3 a gap remains, narrowing the optimal redundancy down to between roughly 10t/3 and 4t.
B. Hamming Weight Distribution Function
Let T ∈ N be a parameter of choice; for simplicity, we restrict T to divide k + 1. Consider the function ∆_T(u) = ⌊wt(u)/T⌋. We directly see that the number of distinct function values is E = (k + 1)/T. This function is a step threshold function based on the Hamming weight of u, with E − 1 steps; the threshold values, at which the function value increases by one, lie at integer multiples of T, see Fig. 4. We restrict ourselves to the case 2t + 1 ≤ T and will give an optimal construction with redundancy r_∆T(k, t) = 2t in this regime. First, note that when 4t + 1 ≤ T, we can show that ∆_T(u) is 2t-locally binary, as two consecutive thresholds have distance at least 4t + 1; consequently, r_∆T(k, t) = 2t by Lemma 5. We now focus on the more general case 2t + 1 ≤ T. We start by describing the encoding function. Recall the shifted modulo operation from Definition 10.
Construction 2. We define Enc_∆T(u) = (u, p_wt(u)).

We show that this encoding function gives an FCC for the Hamming weight distribution function ∆_T(u).
VI. MIN-MAX FUNCTIONS
Assume now that k = wℓ for some integers w and ℓ. In this section, we consider u to be formed of w parts, u = (u^(1), . . . , u^(w)), where each u^(i) ∈ Z_2^ℓ is of length ℓ. The function of interest is the min-max function defined next.

Definition 11. The min-max function returns the indices of the minimum and maximum parts, mm_w(u) ≜ (arg min_i u^(i), arg max_i u^(i)), where u = (u^(1), . . . , u^(w)), u^(i) ∈ Z_2^ℓ with k = wℓ, and the ordering < between the u^(i)'s is primarily lexicographic (the left-most bit is the most significant) and secondarily, if u^(i) = u^(j), according to ascending indices.
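The closed-form expression in Definition 11 is reconstructed above from the worked example that follows; under that reading, a direct implementation (ours) looks as follows:

```python
# Illustrative sketch: the min-max function returns the 1-based indices of the
# minimum and maximum parts of u under lexicographic order, with ties broken by
# the ascending-index rule of Definition 11 (equal parts: smaller index is smaller).

def min_max(u_parts):
    """u_parts: list of w binary tuples, each of length ell."""
    # Python tuple comparison is lexicographic; appending the index i as a
    # tie-breaker makes equal parts compare by ascending index.
    i_min = min(range(len(u_parts)), key=lambda i: (u_parts[i], i))
    i_max = max(range(len(u_parts)), key=lambda i: (u_parts[i], i))
    return (i_min + 1, i_max + 1)

print(min_max([(1, 0, 0), (0, 1, 0), (0, 1, 0)]))  # -> (2, 1)
```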
For example, u = (u^(1), u^(2), u^(3)) = (100, 010, 010) has the ordering u^(2) < u^(3) < u^(1) and thus mm_w(u) = (2, 1). For w = 1 the function is constant, and for w = 2 it is a binary function, for which Lemma 5 provides an optimal solution. For w ≥ 3, we provide two lower bounds on the redundancy in Lemma 9 and Corollary 4. We characterize the function distance matrix of the min-max function in Claims 1 and 2 and obtain an upper bound on the redundancy based on Theorem 2, derived in Lemma 10. Since Lemma 10 is obtained using a Gilbert-Varshamov argument, the result is of an existential nature. We construct explicit FCCs based on standard error-correcting codes in Constructions 3 and 4. Throughout this section we refer to the function distance matrix D_mm(t, f_1, . . . , f_E) as D_mm for ease of notation. The following example illustrates our results.
Example 3. Consider a min-max function with w = 3 and ℓ ≥ 3. From Claims 1 and 2 we obtain the function distance matrix D_mm for this case and any t. For example, the function distance between the function values (1, 2) and (1, 3) is 1, since there exist information words u_1 = (000, 010, 001) and u_2 = (000, 010, 011) such that d(u_1, u_2) = 1, mm_w(u_1) = (1, 2) and mm_w(u_2) = (1, 3). For w = 3 this holds for all pairs of function values except those of the form (i, j) and (j, i), where at least two bits must be changed to move from one function value to the other, i.e., for every u_1, u_2 such that mm_w(u_1) = (i, j) and mm_w(u_2) = (j, i), we have d(u_1, u_2) ≥ 2, cf. the proof of Claim 2. A possible FCC construction is to use a code with cardinality w(w − 1) = 6 and distance matrix D_mm in the fashion of Theorem 2, i.e., the redundancy vectors are assigned based on f(u) instead of u. We will observe later that such an encoding yields a redundancy that is not too far from optimal. From Lemma 9, which is presented in the sequel, for w = 3 the optimal FCC redundancy is at least 10t/3 − 11/6. On the other hand, using single-parity check codes, we will construct an FCC for w = 3 with redundancy r_SP = 4t in Construction 3.
We now formally present our results, starting with the lower bound on the redundancy.

Lemma 9. For w ≥ 3 and ℓ ≥ 2, the optimal redundancy r_mmw(k, t) is bounded from below as follows.

Proof. Let u_{i,j} ∈ Z_2^k with i, j ∈ [w], i ≠ j, be w(w − 1) information vectors that will be specified later, and let D_mm(t, u_{1,2}, . . . , u_{w−1,w}) be their distance matrix. We use Corollary 1 and Lemma 1 to obtain r_mmw(k, t) ≥ N(D_mm(t, u_{1,2}, . . . , u_{w−1,w})). We first prove the lower bound for ℓ = 2. To obtain a good lower bound, we need to find a suitable set of w(w − 1) representative information vectors and characterize their distance matrix D_mm(t, u_{1,2}, . . . , u_{w−1,w}). We choose the representative information vectors to be u_{i,j} = (01, . . . , 01, 00, 01, . . . , 01), where i, j ∈ [w] and i ≠ j. Note that mm_w(u_{i,j}) = (i, j), and therefore the corresponding function values are all distinct.

The pairwise distances satisfy d(u_{i,j}, u_{i′,j′}) = 2 for function values which agree either in the minimum or the maximum value, and d(u_{i,j}, u_{i′,j′}) = 4 for function values that agree neither on the minimum nor on the maximum. Counting these cases for a given u_{i,j}, each row of D_mm(t, u_{1,2}, . . . , u_{w−1,w}) has 2(w − 2) entries equal to 2t − 1 and (w − 1)(w − 2) + 1 entries equal to 2t − 3. Having characterized the entries of the distance matrix, we can now bound the sum of pairwise distances, where equation (a) follows from the symmetry of the matrix D_mm(t, u_{1,2}, . . . , u_{w−1,w}) and equality (b) follows by substituting the values discussed above and rearranging terms. This proves the lower bound of Lemma 9. The proof for all ℓ > 2 follows the same steps after setting the ℓ − 2 left-most bits in every part of each u_{i,j} to 0.
While this bound is good for large t and moderate w, we can derive a stronger bound for fixed t and large w, as follows.
Proof. From the proof of Lemma 9, we know that r_mmw(k, t) ≥ N(D_mm(t, u_{1,2}, . . . , u_{w−1,w})). This quantity can however be bounded from below by noting that d(u_{i,j}, u_{i′,j′}) ≤ 4 for any i, j, i′, j′ (as shown in the same proof), and thus N(D_mm(t, u_{1,2}, . . . , u_{w−1,w})) ≥ N(w(w − 1), 2t − 3). In other words, the w(w − 1) vectors must form a code of minimum distance 2t − 3. Abbreviating r ≜ N(w(w − 1), 2t − 3), it follows from a sphere-packing argument that 2^r ≥ w(w − 1) · V(r, t − 2), where V(r, t − 2) denotes the size of the radius-(t − 2) Hamming ball over vectors of length r. Consequently, the claimed bound follows, where in (a) we used the inequality r ≥ log w(w − 1).
We provide two upper bounds on the optimal redundancy r mmw (k, t) of FCCs designed for the min-max function. The first bound (Corollary 5) follows from Lemma 4 and uses standard error-correcting codes. On the other hand, the second bound (Lemma 10) is obtained by examining the function distance matrix of the min-max function and using irregular-distance error-correcting codes.
Corollary 5 (Corollary of Lemma 4). Given t and w such that t ≥ 5 and w(w − 1) ≤ 4t², the optimal redundancy r_mmw(k, t) is bounded from above as follows.

Proof. Encoding the parity vectors with an error-correcting code of minimum distance 2t results in an FCC. The redundancy of this FCC is then equal to the length of the code used; therefore, the bound follows from Lemma 4.
Lemma 10. For w ≥ 3 and ℓ ≥ 3, the optimal redundancy r_mmw(k, t) of FCCs is bounded from above as follows. We start by bounding the distance between any two function values.

Claim 1. Consider a min-max function as defined in Definition 11. For all w ≥ 3 and ℓ ≥ 3, the minimum distance between any two function values (cf. Definition 6) f_1 and f_2 is at most 2, i.e., d_mmw(f_1, f_2) ≤ 2.

To prove Claim 1 we need to show that for every two function values f_1 ≠ f_2, there exist two information vectors u_1 ≠ u_2 such that mm_w(u_1) = f_1, mm_w(u_2) = f_2 and d(u_1, u_2) = 2. We show the existence of such information vectors in Appendix C. Given the result of Claim 1, we know that the entries of D_mm, [D_mm]_ij = 2t + 1 − d_mmw(f_i, f_j), are bounded from below by 2t − 1. The remaining part is to count the number of entries that satisfy [D_mm]_ij = 2t, i.e., the number of values i, j for which d_mmw(f_i, f_j) = 1. We show that this number equals 4w(w − 1)(w − 2) by counting, for each function value f_i, the number of function values f_j that satisfy d_mmw(f_i, f_j) = 1.
Claim 2. Consider a min-max function as defined in Definition 11. For all w ≥ 3 and ℓ ≥ 3, given a function value f_1 = (i, j), the number of function values f_2 ≠ (i, j) that satisfy d_mmw(f_1, f_2) = 1 is 4(w − 2). Therefore, the number of entries in D_mm that are equal to 2t is 4w(w − 1)(w − 2).
The proof of Claim 2 consists of finding, for every function value f_1, the number of distinct function values f_2 that can be obtained by changing one bit in any information vector u satisfying mm_w(u) = f_1. A formal proof is provided in Appendix D. The results of Claim 1 and Claim 2 characterize the entries of the function distance matrix D_mm. Recall that Theorem 2 implies that r_mmw(k, t) ≤ N(D_mm).
We use Lemma 2 and the results of Claims 1 and 2 to prove Lemma 10. From Lemma 2 and by the symmetry of D_mm, we obtain a bound in which π is a permutation of the integers in [w(w − 1)]. Note that the inner sum adds all the entries of a given row π(i) of D_mm; thus, the maximum of this sum can be bounded from above by setting i = w(w − 1) and choosing a row with the largest entries.
From Claims 1 and 2, we know that a row i with maximum entries contains exactly one entry equal to 0, 4w − 8 entries equal to 2t and the remaining entries equal to 2t − 1. Given this observation, we obtain that Φ(r) in the lemma statement is a lower bound on Φ′(r), and the lemma follows.
We give an FCC based on the single-parity check code in Construction 3.
Construction 3.
Let C_SP be a subcode of the single-parity check code of size w(w − 1). Replicate every bit in the codewords of C_SP t times. Assign a unique codeword of the expanded version of C_SP to the redundancy vector p_{i,j} used for all information vectors u such that f(u) = (i, j).

Lemma 11. Construction 3 is an FCC for the min-max function and has redundancy r_SP = t(⌈log(w(w − 1))⌉ + 1).
Proof. The lemma follows from the following observations: 1) the length of each codeword in C_SP is ⌈log(w(w − 1))⌉ + 1; 2) the minimum distance of C_SP is 2; and 3) replicating every bit in the codewords of C_SP t times gives the desired code of length t(⌈log(w(w − 1))⌉ + 1), cardinality w(w − 1) and minimum distance 2t.
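The following sketch (ours) instantiates Construction 3; the helper names are assumptions, but the steps — take w(w − 1) single-parity-check codewords, replicate each bit t times, and index the result by the function value (i, j) — follow the construction as stated:

```python
# Illustrative sketch of Construction 3 for the min-max function.
from itertools import product
from math import ceil, log2

def spc_subcode(size):
    """First `size` codewords of a single-parity-check code of suitable length."""
    m = ceil(log2(size))            # information bits; codeword length is m + 1
    words = []
    for info in product((0, 1), repeat=m):
        words.append(info + (sum(info) % 2,))   # append even-parity bit
        if len(words) == size:
            return words

def redundancy_vectors(w, t):
    code = spc_subcode(w * (w - 1))
    # Replicating each bit t times turns minimum distance 2 into 2t.
    expanded = [tuple(b for bit in cw for b in (bit,) * t) for cw in code]
    pairs = [(i, j) for i in range(1, w + 1) for j in range(1, w + 1) if i != j]
    return dict(zip(pairs, expanded))  # p_{i,j} for every function value (i, j)

p = redundancy_vectors(w=3, t=2)
print(len(p[(1, 2)]))  # redundancy t * (ceil(log(w(w-1))) + 1) = 2 * (3 + 1) = 8
```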
Following the arguments of Lemma 11, it is clear that Construction 4 gives an FCC for the min-max function with redundancy r_RM = 2^m. To see the importance of this construction, consider the example where t is a power of 2 and w ≤ √(8t). Then one can use an RM(1, log(4t)) code to obtain an FCC for the min-max function with redundancy equal to 4t, which is asymptotically, for large w, only 3 bits away from the lower bound of Lemma 9.
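Only standard facts about first-order Reed-Muller codes are needed to check the parameters quoted above (Construction 4 itself is not reproduced in this excerpt): RM(1, m) has length 2^m, cardinality 2^(m+1), and minimum distance 2^(m−1). A quick sketch:

```python
import numpy as np

def rm1_codewords(m):
    """All 2^(m+1) codewords of the first-order Reed-Muller code RM(1, m):
    evaluations of affine Boolean functions c0 + a.x over all x in F_2^m."""
    pts = np.array([[(x >> b) & 1 for b in range(m)] for x in range(2 ** m)])
    words = []
    for a in range(2 ** m):
        avec = np.array([(a >> b) & 1 for b in range(m)])
        for c0 in (0, 1):
            words.append((pts @ avec + c0) % 2)
    return np.array(words)

t = 8                       # t a power of two
m = int(np.log2(4 * t))     # length 2^m = 4t, distance 2^(m-1) = 2t
C = rm1_codewords(m)
print(C.shape)  # (8t, 4t) = (64, 32): enough codewords for w(w-1) values, w <= sqrt(8t)
```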
VII. REAL-VALUED FUNCTIONS
In this section we apply our theoretical results on FCCs to a collection of real-valued functions that take a real number as input and output a real number, i.e., functions of the form g : R → R. Throughout this work, however, we consider digital functions that take binary vectors as input and have an arbitrary output, i.e., functions of the form f : Z_2^k → Im(f). To this end, let b2r : Z_2^k → R be a mapping from the binary information vectors to real numbers. Thus, throughout this section the considered functions are RV_g(u) = g(b2r(u)), where g is one of the functions presented below. While our ideas apply to several binary representations, we opt to explain the results using a fixed-precision quantization b2r as follows. Given a fixed precision ϵ > 0, the mapping b2r maps the binary vectors to intervals of size ϵ, i.e., b2r(u) = ϵ(bin2dec(u) − 2^(k−1) + 0.5), where bin2dec : Z_2^k → {0, 1, ..., 2^k − 1} is the standard mapping from binary to decimal. Notice that this way, real values in the range of ±(2^(k−1) − 0.5)ϵ can be represented; the corresponding intervals of size ϵ will be called quantization intervals hereafter.
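The quantization map is straightforward to implement; in the minimal sketch below, the LSB-first bit order inside bin2dec is our assumption about the "standard mapping".

```python
def b2r(u, eps, k):
    """Fixed-precision quantization from Section VII: interpret the k-bit
    vector u as an integer and centre the resulting grid at 0."""
    bin2dec = sum(bit << i for i, bit in enumerate(u))  # LSB-first (assumed)
    return eps * (bin2dec - 2 ** (k - 1) + 0.5)

k, eps = 4, 0.25
grid = sorted(b2r([(x >> i) & 1 for i in range(k)], eps, k) for x in range(2 ** k))
print(grid[0], grid[-1])  # +/- (2^(k-1) - 0.5) * eps = +/- 1.875
```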
• ReLU function: ReLU(x) = max(0, x).
• Sigmoid function: σ(x) = 1/(1 + e^(−x)) and its derivative ∂σ(x)/∂x = σ(x)(1 − σ(x)).
• Hyperbolic tangent function: tanh(x) = (e^x − e^(−x))/(e^x + e^(−x)) and its derivative ∂tanh(x)/∂x = 1 − tanh(x)^2.

These functions have practical importance as activation functions, and their derivatives are used in neural networks. Throughout this section we discuss FCCs whose encoding is based on the function value only, as in Theorem 2. Thus, the defining quantity of interest is the function distance matrix D_RV_g(t, f_1, ..., f_E), which we abbreviate by D_g.
We now study the three functions mentioned above. We delay the study of the derivatives of the sigmoid and tanh(x) functions until after Lemma 13. These three functions can be divided into two classes: a class of functions that are bijective on a certain interval and constant (output equal to 0) otherwise, such as the ReLU function; and a class of functions that are bijective on a certain interval and have approximately constant output for small and large values of x, such as the sigmoid and tanh functions. To see this division, notice that numerically one can consider tanh(x) = 1 for all x ≥ 6 and tanh(x) = −1 for all x ≤ −6. Similarly, σ(x) = 1 for x ≥ 10 and σ(x) = 0 for x ≤ −10. The ReLU function is bijective for all x > 0 and is 0 otherwise (cf. Fig. 5).
Let [a, b] ⊂ R be the interval on which the function g is bijective and assume for simplicity that ϵ divides b − a. For notational convenience, we denote the binary vector representing a certain quantization center c by w_i, u_i, or v_i if c < a, a ≤ c ≤ b, or c > b, respectively. In addition, we define d(u_i, v) ≜ min_ℓ d(u_i, v_ℓ) to be the minimum Hamming distance between the binary vector u_i representing a quantization center c_1 ∈ [a, b] and all binary vectors representing a quantization center c_2 > b. We define d(u_i, w) and d(v, w) similarly.
We characterize the redundancy of an FCC for the considered real-valued functions in Lemma 12 and Lemma 13. Let g_00 : R → R be a real-valued function that is bijective on an interval [a, b] ⊂ R and equal to 0 on R \ [a, b]. Fix an ϵ > 0 and define the symmetric square matrix D_RV00 with (b − a)/ϵ + 1 rows whose entries for i ≤ j are determined by the distance profiles d(u_i, u_j), d(u_i, v), and d(u_i, w).

Lemma 12. The redundancy of an FCC for the function RV_g00 is bounded from above by N(D_RV00). If only one input vector evaluates to 0, the upper bound becomes the optimal redundancy of an FCC for this function. The same holds if several input vectors evaluate to 0 and have similar distance profiles to each of the u_i's. This observation holds for the next lemma as well.
Proof. The proof follows from Theorem 2 by designing the parities based on the function values. On a high level, since all input values in R \ [a, b] have the same output value, the codewords of the form (w_i, p) and (v_j, p) are allowed to be confusable after t errors and can thus have a distance less than 2t + 1. However, the codewords of the form (u_i, p) cannot be confusable with any other codeword after t errors. Therefore, for every u_i we search for the closest (in Hamming distance) v or w and design the parity vectors of the v_j's and w_j's accordingly. The same is done for u_i and u_j for i ≠ j.
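The parity design above hinges on the distance profiles d(u_i, u_j), d(u_i, v), and d(u_i, w). The brute-force sketch below computes two of these profiles for a toy ReLU-like setting; the parameters k = 6, ϵ = 0.5, and [a, b] = [−4, 4] are purely illustrative choices of ours.

```python
from itertools import product

k, eps = 6, 0.5
a, b = -4.0, 4.0   # hypothetical interval on which g is bijective

def b2r(u):
    return eps * (sum(bit << i for i, bit in enumerate(u)) - 2 ** (k - 1) + 0.5)

def ham(x, y):
    return sum(p != q for p, q in zip(x, y))

U, V, W = [], [], []   # centres in [a, b], above b, below a (paper's u, v, w)
for u in product((0, 1), repeat=k):
    c = b2r(u)
    (U if a <= c <= b else V if c > b else W).append(u)

# d(u_i, v) and d(u_i, w): distance from u_i to the closest v_l resp. w_l
profile_v = [min(ham(u, v) for v in V) for u in U]
profile_w = [min(ham(u, w) for w in W) for u in U]
print(profile_v)
print(profile_w)
```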
Let g_01 : R → R be a real-valued function that is bijective on an interval [a, b] ⊂ R and satisfies g_01(x) = 0 for all x < a and g_01(x) = 1 for all x > b. Fix a precision ϵ and, for ease of notation, define v ≜ u_{(b−a)/ϵ+1} and w ≜ u_{(b−a)/ϵ+2}. Let the symmetric matrix D_RV01 with (b − a)/ϵ + 2 rows be defined analogously to D_RV00.

Lemma 13. The redundancy of an FCC for the function RV_g01(u) = g_01(b2r(u)) is bounded from above by N(D_RV01).

The proof is omitted as it follows the same steps as the proof of Lemma 12 while taking care not to confuse any of the v_i's with any of the w_i's. Now we study the derivatives of σ(x) and tanh(x). Both functions are symmetric around 0 and are bijective on an interval [0, a] and constant otherwise. Numerically, one could consider the derivative of σ(x) to be equal to 0 outside the interval [−10, 10] and the derivative of tanh(x) to be 0 outside the interval [−6, 6].
For this set of functions we abuse notation and denote by u_i the binary representation of a quantization center c ∈ [0, a] and by −u_i the binary representation of the quantization center −c. Similarly, v_i is the binary representation of c > a and −v_i is the binary representation of −c < −a. We define d(u_i, ±v) similarly. This notation makes the following definitions easier to present.
Let g_sym : R → R be a real-valued function with g(x) = g(−x) that is bijective on an interval [0, a] ⊂ R and constant for x > a. Fix a precision ϵ and define the symmetric square matrix D_RV−sym with a/ϵ + 1 rows whose entries for i ≤ j are again determined by the distance profiles of the corresponding quantization vectors u_i, −u_i, v_i, and −v_i.

Lemma 14. The redundancy of an FCC for the function RV_gsym(u) = g_sym(b2r(u)) is bounded from above by N(D_RV−sym).

The proof is omitted as it follows the same steps as the proof of Lemma 12.
VIII. CONCLUSION
We introduced a new class of codes called function-correcting codes, which encode a message so as to allow successful recovery of a certain attribute or function value of the message after transmission over an erroneous channel. This encoding potentially reduces the redundancy compared to error-correcting codes by leveraging the side information available to the receiver through the knowledge of the possibly erroneous original message and of the desired function.
We considered an encoding setup in which the message itself is also transmitted and restricted our attention to substitution channels with at most t errors. For this setting, we derived lower and upper bounds on the redundancy of FCCs by establishing a connection to irregular-distance codes. Further, we examined several functions of interest for which we derived explicit distance matrices such that an irregular-distance code satisfying the distance matrix gives an optimal FCC for the function at hand. Furthermore, we derived lower bounds and constructed FCCs for each specific function. Our constructions have optimal redundancy for the Hamming weight distribution functions. For the min-max function, we construct almost optimal codes. For the Hamming weight function there is still a gap of roughly (2/3)t between the lower bound and the provided construction, leaving the problem of finding optimal FCCs open. For real-valued functions, a rigorous study of the distance profiles of the input vectors is needed to understand the gap between the achievable redundancy and the lower bound. Further research directions include the study of FCCs for other functions of interest and under different channels.
APPENDIX A DERIVATIONS OF REDUNDANCIES IN TABLE I
We start by deriving the redundancy obtained by employing a standard error-correcting code on the data, labeled as the column "ECC on Data" in Table I. That is, the data vector u is encoded with a systematic code of dimension k and minimum distance 2t + 1. The redundancy part p of this systematic code is then appended to u, resulting in (u, p). Clearly, with such a construction it is possible to reconstruct u at the receiver and thus f(u). It is known [29, Ch. 5.5] that there exists a binary alternant code of length n, minimum distance 2t + 1, and redundancy at most r ≤ t⌈log n⌉. Since n = k + r, it follows that r ≤ (t log k + t)/(1 − (t/k) log e), and thus, for large k and fixed t, the dominant term is t log k.
We now turn to derive the redundancy obtained by a direct approach of encoding the function values, which corresponds to the column "ECC on Function Values" in Table I. More precisely, we encode the function value f(u) with a (possibly non-systematic) code of cardinality E (recall that E is the size of the image of f) and minimum distance 2t + 1. The resulting codeword c is then appended to u, resulting in (u, c). Also in this case, it is possible to retrieve f(u) by decoding the function value from the received word corresponding to c and simply ignoring the information part u. In this case, the redundancy of our construction is given by the length of the employed code. Using alternant codes, we obtain for the redundancy of the alternant code r_alt ≤ t⌈log(⌈log E⌉ + r_alt)⌉ ≤ t log log E + t + t(1 + r_alt) log e / log E, and thus r_alt ≤ (t log log E + t(1 + log e))/(1 − (t / log E) log e).
Consequently, the redundancy of the direct approach is given by the length of the alternant code, r = ⌈log E⌉ + r_alt. For sufficiently large E and fixed t, this is adequately approximated by log E + t log log E.

Choosing ϵ = ln(r)/r, we obtain that r = 2D/(1 − 2 ln(r)/r). Here we require r ≥ 10 so that ϵ ≤ 1/2. We can then use that ln(D)/D ≥ ln(r)/r for r ≥ D ≥ 3, and we obtain the lemma's statement.
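For a rough numerical feel of the two baselines derived above, the snippet below evaluates the dominant terms t log k (ECC on data) and log E + t log log E (ECC on function values) for illustrative parameters; these are the approximations from this appendix, not exact code lengths.

```python
from math import ceil, log2

t = 4
for k, E in [(10 ** 3, 2 ** 10), (10 ** 6, 2 ** 20)]:
    ecc_data = t * ceil(log2(k))                         # ~ t log k
    ecc_fvals = ceil(log2(E)) + t * ceil(log2(log2(E)))  # ~ log E + t log log E
    print(f"k={k}, E=2^{int(log2(E))}: ECC on data ~ {ecc_data} bits, "
          f"ECC on function values ~ {ecc_fvals} bits")
```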
APPENDIX C PROOF OF CLAIM 1
Proof of Claim 1. We give a proof for ℓ = 3. For ℓ > 3, we can restrict all the bits of all u^(v), v ∈ [w], to be 0 except for the three least significant bits and apply the same proof as for ℓ = 3. We show that for all i, j, i′, j′ ∈ [w] with (i, j) ≠ (i′, j′), there exist two information words u, u′ such that mm_w(u) = (i, j) and mm_w(u′) = (i′, j′), where d(u, u′) = 2. We split the proof into the following three cases.
Note that mm_w(u) = (i, j) by definition of mm_w. We can change u to u′ as follows. First flip the third bit of u^(i) (so that u^(i) = (001)) to change the function value to (i′, j). To change j to j′, it is sufficient to flip the first bit of u^(j′). For the next case, let u = (..., 010, ..., 010). Note that mm_w(u) = (i, j) by definition of mm_w. We can change u to u′ as follows. Flip the first bit of u^(j) (so that u^(j) = (000)) to change the function value to (j, 1) = (i′, 1) (or (j, 2) if j = 1). To obtain j′ as the maximum, it is sufficient to flip the first bit of u^(j′).
APPENDIX D PROOF OF CLAIM 2
Proof of Claim 2. We give a proof for ℓ = 3. For ℓ > 3, we can restrict all the bits of all u^(v), v ∈ [w], to be 0 except for the three least significant bits and apply the same proof as for ℓ = 3. Fix f_1 ≜ (i, j) and consider all information words u such that mm_w(u) = f_1. Note that for any u, the u^(v)'s form a totally ordered set and can therefore be arranged in a chain, as illustrated in Fig. 6. By definition, for any f_2 with d_mm_w(f_1, f_2) = 1, there exists a u with mm_w(u) = f_1 such that, by flipping one bit in u, the function value changes from f_1 to f_2. We find all possible function values that can be obtained after a single bit flip in some u with mm_w(u) = (i, j). We distinguish between the following types of edit operations.
1) Change one bit in u^(i). First, change u^(i) such that the result becomes larger than u^(i) but smaller than u^(j). This way it is only possible to change the function value to (v, j) for an arbitrary v ∈ [w] \ {i, j}. This can in fact be achieved by choosing u to be u = (011, ..., 001, 010, 011, ..., 111, 011, ..., 011) and flipping the first bit of u^(i) (so that u^(i) = (101)). Note that mm_w(u) = (i, j). Second, change u^(i) such that it becomes larger than u^(j). This way, it is only possible to change the function value to (v, i), v ∈ [w] \ {i, j}. This can be achieved by choosing u to be u = (011, ..., 001, 010, 011, ..., 100, 011, ..., 011) and flipping the first bit of u^(i) (so that u^(i) = (101)). For an illustration, see Fig. 6.

2) Change one bit in u^(j). First, we change u^(j) such that the result becomes smaller than u^(j) but larger than u^(i). This way it is only possible to change the function value to (i, v) for an arbitrary v ∈ [w] \ {i, j}. This can in fact be achieved by choosing u to be u = (100, ..., 000, 101, 100, ..., 110, 100, ..., 100) and flipping the first bit of u^(j) (so that u^(j) = (010)). Note that mm_w(u) = (i, j). Second, change u^(j) such that it becomes smaller than u^(i). This way, it is only possible to change the function value to (j, v), v ∈ [w] \ {i, j}.
3) Change one bit in u^(v), v ∈ [w] \ {i, j}. This does not yield any additional function values that can be reached, since it is only possible to obtain (v, j) or (i, v).

Since the resulting function values in cases 1) and 2) are distinct, for each f_1 there exist 4(w − 2) values f_2 with f_1 ≠ f_2 and d_mm_w(f_1, f_2) = 1. Using further that there are w(w − 1) function values, the total number of entries in D_mm that are equal to 2t is 4w(w − 1)(w − 2).
"year": 2021,
"sha1": "8fe48b2dddb22b74d51fd421b7af0762f51b446d",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2102.03094",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "8fe48b2dddb22b74d51fd421b7af0762f51b446d",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
UI Design and Optimization Method for Museum Display Based on User Behavior Recommendation
Museum display design often lacks rich presentation methods and therefore fails to engage visitors. This paper proposes a museum object recommendation method based on collaborative filtering, which simplifies the display design, improves the recommendation effect, and alleviates the scalability problem. First, the recommendation algorithm combines the advantages of memory-based collaborative filtering and uses smoothing to improve recommendation efficiency and achieve the best consistency. Then, a cross-domain collaborative filtering rating-matrix generation model is used to establish the correlation between multiple rating matrices by finding a shared hidden clustering rating matrix, which also improves the recommendation effect. Finally, the results show that single-behavior user data combined with a forgetting mechanism can be used for recommendation, that SVD makes full use of the interaction data of various behaviors, and that the NMF algorithm makes full use of multi-behavior user data, which can effectively address the existing problems. Stochastic gradient descent is applied to the SVD algorithm to accelerate the convergence of the model, improve its performance, and effectively improve the accuracy of score prediction.
Introduction
Early recommendation systems simply filtered information, and systems that recommend only popular products can no longer meet users' personalized needs. Based on a user's basic personal information and behaviors such as clicks and searches, a recommender finds the products that best match the user's interests among many products and pushes them to that specific user. Traditional recommendation algorithms rely on user rating data to obtain user preferences, achieve interest clustering through name-tag information, and obtain optimal values for feature attributes and interest-cluster analysis [1]. A personalized recommendation system can be built as a course recommendation algorithm that integrates user characteristics and interest clustering [2-4]. Because of the limits of time and product display space, the different characteristics of users are counted, the characteristic attributes are weighted, an attenuation factor is introduced, and the course prediction score is then calculated; the system therefore needs to maximize its utility with limited resources. On the basis of personal information and historical behavior, users can be understood to a certain extent, and this is where recommender systems come into play. The similarity of users within each cluster is calculated, the course prediction score is computed, and the two prediction scoring mechanisms are weighted and integrated by assigning different weights. By finding items that match a user's purchase intention, the system can turn unclear needs into actual, concrete needs. Introducing the user's general characteristics and an interest-clustering mechanism fully considers user attributes and effectively improves the accuracy of the recommendation algorithm. A user history preference fusion mechanism is introduced, and the user-course evaluation matrix is then recombined to construct a user historical preference similarity set. Most of the data in behavior logs are very sparse [5-7]. Research on recommendation algorithms must also address the dynamic change of user interest, and the datasets used are widely adopted in the recommendation field. To adapt to the diversity of user course evaluation data in different time intervals, the preference dataset constructed in the previous step is converted. In the implementation, a dynamic preference-change curve and a time decay function are introduced, which affect the quality of the subsequent neighbor search and of the algorithm results; the static profile is turned into a dynamic one to fit the changing situation of the user's interests and preferences, capturing the effect of time factors on recommendation results. At present, there are still many problems in the display design of Chinese museums. Most museums focus on the external image in their display design while ignoring the actual function of the display, and the design lacks in-depth research and organization of the cultural relics, resulting in display styles that do not match the artifacts. A recommendation algorithm based on the dynamic change of user interests can improve recommendation accuracy. Physical objects hung in showcases have a strong visual impact on visitors.
When small cultural relics and utensils must be displayed, the overall volume of the showcase occupies a large space, and a dedicated showcase cannot be customized for each item. Looking up, a visitor can see the whole exhibit, and details are easy to notice when viewing vertically displayed exhibits while standing; this display method suits large-area fabrics and clothing without much three-dimensional tailoring. If visitors are isolated from the cultural relics at a distance, the best viewing angle is lost [8-10]. Many artifacts appear out of place within the overall exhibition theme space, and such limitations make actual museum exhibits work against the original intent of the display design. The gravitational load of suspension does not cause much damage to the exhibits. Some exhibits in showcases are tilted from the horizontal plane so that visitors from a certain direction can view them; direct horizontal placement suits exhibits with a small area, little three-dimensional tailoring, or damage, such as fabric fragments, clothing pieces, purses, and fan bags. The showcase display is like a window that can fully present the beauty of clothing. At present, most museum exhibitions only display real objects and offer no richer display methods. Entering the museum, people shed their real identity and become an audience: viewing is seeing with the eyes, and the audience receives the exhibition content. Common cultural relics are installed in display cabinets with only a brief note on their date. With such a traditional, single display mode, it is difficult to increase visitors' interest. A recommender system, by contrast, updates its recommendation strategy according to the user's immediate feedback and recommends the items the user likes. It is difficult for visitors to deeply appreciate the story behind the cultural relics in a static exhibition, and without interactive communication between exhibits and audience, viewing is inefficient and the educational purpose is weakened. The emergence of digital art display forms has effectively made up for the defects of traditional museum display design [11-13]. Digital art is a modern interactive medium that can make the audience feel immersed and offers a strong sense of immersion and interaction. Behavioral streaming recommendation methods can effectively capture users' latest preferences from new data, strengthen people's active participation, and enable audiences to take part, recommending items to users precisely based on their latest preferences. By training the recommendation model with new data in a timely manner, the system can quickly learn the preferences of new users and solve the cold-start problem caused by their sparse interaction data, enriching people's spiritual and cultural life. There is no need to store a large amount of historical data, which helps protect user privacy and reduces the negative impact of storage. For interactive recommender systems, recommendation feedback is a dynamic interactive process, which means that user preferences can change over time, just as spatiotemporal relationships affect how humans perceive the world, information, and relationships [14, 15].
The linear narrative method of organizing exhibitions is popular for its fast information dissemination and high acceptance rate. A balance must also be struck between long-term and short-term benefits: if the recommendation strategy is obsessed with short-term gains and neglects to explore long-term benefits, it cannot model user preferences comprehensively. In today's information age, new technologies change the way people read and their concepts of time and space; digital display enables a nonlinear narrative that brings a new experience to the audience in the museum, transcends reality, and evokes philosophical and ideological reflection on the life world, offering experiences and perceptions that break with routine.
Museum Display Design
2.1. Interactive Recommendation. Network-based methods and forgetting-mechanism-based methods show great potential in multiple sequential decision-making scenarios. The number of items to be recommended in a recommender system is usually large, resulting in huge action and state spaces. One approach stores representative historical data and then samples from both the stored historical data and newly received data. The rewards of a large number of state-action pairs are unobserved, and these constraints hinder the recommendation system from learning the optimal recommendation strategy. The recommendation model is trained with the sampled data to capture both the user's recent preferences, implicit in the newly received data, and the user's long-term preferences, implicit in the historical data. Methods based on interactive recommendation models usually use historical behavior records to model the current environment so that rewards for unseen state-action pairs can be predicted. When sampling, if newly received data and historical data are treated equally, the importance of new data is ignored and it is difficult to capture users' recent preferences. Model-based reinforcement learning recommendation methods use a generative adversarial network to model the environment, imitating the change process of user behavior and habits and learning the reward function, and then use a connected deep Q-network (DQN) to optimize the current recommendation strategy. A neural memory network can maintain both the user's recent and long-term preferences. A model-based multiagent reinforcement learning recommendation method jointly optimizes multiple recommendation scenarios on e-commerce platforms, with frequent updates from a continuous data stream to maintain long-term user preferences. Each recommendation scenario is an agent, and the method learns an optimal multiscenario recommendation strategy by learning the sequence correlation between agents and optimizing their joint reward. Useless interaction data and outliers are filtered out so that newly received data and historical data can be used effectively to learn users' recent and long-term preferences, respectively. Generative adversarial networks can also model the current recommendation environment to learn offline recommendation policies. Methods that rely on matrix factorization models for recommendation require modeling the current environment; however, some scenarios are difficult to model, which leads to errors in the environment model, as shown in Figure 1.
The Development of Museum Display Design.
People's material needs have been satisfied to a great extent with the progress of society, and attention has gradually shifted to the spiritual level. The information level of the Lianzhu (linked-pearl) pattern display is relatively simple: only the first layer of the pattern's evolution is shown in a long scroll. Modern people's thoughts are diverse, and their spiritual and cultural needs have changed in many respects; compared with the past, evaluation standards and aesthetic concepts have also changed greatly. The long scroll arranges the restored bead patterns drawn by the author according to the known chronological clues, showing how the structure of the bead pattern changes step by step as it merges with traditional Chinese culture. When the presentation of an exhibition remains monotonous and rigid, the audience experiences visual fatigue and eventually loses interest in the museum. We therefore need to break the shackles of traditional thinking and develop new theories that fit the trend of the times. There are separate text descriptions for the bead circle, the theme pattern, and the flower, as well as a small map showing the visitor's current location. Many museums still keep very traditional concepts and designs: both the display of objects and the signage are uniform and rigid, making the whole visit dull. In the overall text description of the beaded fabric pattern, some interactive buttons are in fixed positions. These exhibitions are relatively independent, do not form a system, and rarely interact with people in the real environment, so they are detached from reality. A recommendation system in practical applications must go through model design, model training, offline testing, and online operation. To change this situation completely, ideological theories and concepts must change and be updated in time. In offline testing, a model can often achieve good results by tuning parameters. The display space itself is most important, embodying the spirit of the museum. Once online, model performance tends to decline over time. Through the display space, visitors can directly face the history in the museum and gain the most direct experience and feeling. The display space of most existing museums is still quite rigid. In practice, users and items constantly increase and change, and the data model must adapt; the growth of computational time and space complexity causes scalability problems. Visitors' inner responses are not well addressed, and the displays lack vividness. Museums are mainly responsible for disseminating and protecting culture and, through careful design and planning, serve the public and promote social development. Since people are the main agents of social change and development, display design must put people first, as shown in Figure 2.
[Figure labels: historical behavior information, similarity, scoring mechanism, unclear requirements, historical preference similarity set, dynamic curve, time decay function, recommendation accuracy, recommendation policy update, recommendation model training, historical data, interactive recommender system.]
Personalized Recommendation Algorithm.
Research in this area tends to improve existing offline single-behavior recommendation methods so that they can make recommendations from streaming data. The idea of collaborative filtering becomes a highly interpretable technique for incrementally learning users' recent preferences from data streams. Coordinate descent, stochastic gradient descent, and fast alternating least squares are used for incremental training of recommendation models in streaming scenarios, as shown in Table 1; a minimal sketch of such incremental training is given below.
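The sketch shows a latent-factor model updated by one stochastic gradient step per incoming (user, item, rating) event. All dimensions and hyperparameters are illustrative choices of ours, not values from the paper.

```python
import numpy as np

class StreamingMF:
    """Minimal matrix-factorization recommender trained incrementally by SGD,
    one (user, item, rating) event at a time."""

    def __init__(self, n_users, n_items, dim=16, lr=0.01, reg=0.02, seed=0):
        rng = np.random.default_rng(seed)
        self.P = rng.normal(0, 0.1, (n_users, dim))  # user latent factors
        self.Q = rng.normal(0, 0.1, (n_items, dim))  # item latent factors
        self.lr, self.reg = lr, reg

    def predict(self, u, i):
        return self.P[u] @ self.Q[i]

    def update(self, u, i, r):
        # single SGD step on the regularized squared error of one new event
        e = r - self.predict(u, i)
        p, q = self.P[u].copy(), self.Q[i]
        self.P[u] += self.lr * (e * q - self.reg * p)
        self.Q[i] += self.lr * (e * p - self.reg * q)

model = StreamingMF(n_users=100, n_items=50)
for u, i, r in [(3, 7, 4.0), (3, 9, 2.0), (5, 7, 5.0)]:  # toy event stream
    model.update(u, i, r)
print(round(model.predict(3, 7), 3))
```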
Multibehavior Offline Recommendation Method.
Existing recommendation methods usually only use a single type of user behavior data, together with mechanisms such as forgetting, to recommend items to users, and their search and processing of massive high-dimensional data fail to effectively utilize the interaction data belonging to various user behaviors, as shown in Table 2. Multibehavior recommendation methods use interaction data of various behavior types, such as browsing, to recommend items to users in offline scenarios. A variety of learning techniques, such as multitask learning and transfer learning, are adopted to solve the data sparsity problem caused by insufficient data of a single behavior type. The multibehavior offline recommendation method uses a main behavior and a class of auxiliary behaviors to recommend items to users, treating auxiliary behavior interactions like main behavior interactions. Learning user preferences from OTML-MF interaction data gave ml-100k = 0.63, ml-1m = 2.44, λu = 1.87, i = 1.47, η = 1.44, and α = 2.02. The interactions of multiple behavior types are used to make recommendations through the alternating least squares method, rather than learning the user's preference for items separately from the main and auxiliary behaviors and then recommending according to the learned preference, as shown in Table 3.
Overview of Personalized Recommender Systems
The recommendation system algorithm improves recommendation efficiency by combining the advantages of memory-based and model-based collaborative filtering and adopting smoothing. As shown in Table 4 and Figure 7, when the MAF is 40, the uniformity is the best: UBCF = 1.64, UICCF = 1.31, User-CT = 1.34, and User-CCIC = 1.47. Content-based recommendation algorithms originated from information retrieval. A cross-domain collaborative filtering scoring-matrix generation model establishes the correlation between multiple scoring matrices by finding a shared implicit cluster scoring matrix, which also improves the recommendation effect and alleviates the scalability problem. When the MAF is 50, the efficiency is the highest: UBCF = 1.98, UICCF = 1.58, User-CT = 1.65, and User-CCIC = 1.91. Matrix factorization is also based on the idea of collaborative filtering and can predict a score for any combination of user and item by means of an inner product; the matrix factorization algorithm has a strong ability to mine latent features. An adaptive learning-rate function, which combines the characteristics of an exponential function and a linear function, is applied to the SVD++ recommendation algorithm, speeding up convergence and improving the accuracy and scalability of the recommendation model; a hedged sketch of such a schedule is given at the end of this section. Stochastic gradient descent is applied to the SVD algorithm, which accelerates the convergence of the model and improves its performance. Considering the interest changes caused by time factors in SVD effectively improves the accuracy of score prediction.

As shown in Figure 8, by analyzing the information architecture of multiple local museum apps in the software market, we can find their commonalities and differences, sort out the main display contents, and conclude that the information architecture of a local museum app should cover understanding information, obtaining information, enhancing information, and sharing information. The main content of the museum app is integrated on this basis, and each functional division is classified in detail to give the app a reasonable logic and clear structure. This yields four main modules: Discovery Museum, Reading Museum, Experience Museum, and Sharing Museum. The "Discovery Museum" is mainly divided into the museum introduction interface and the museum positioning and navigation interfaces. The "Reading Museum" contains the display and introduction interfaces of each exhibition hall; the "Experience Museum" module mainly provides interactive features, such as scanning a QR code to obtain cultural relic information and an interactive game interface; and the "Sharing Museum" is mainly the user's personal center, used for personal collections, follows, personal information, and other interfaces on the personal homepage, as well as feedback to the museum.
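The exact form of the adaptive learning-rate function is not given in the paper; the sketch below shows one plausible schedule mixing exponential and linear decay, purely as an illustration.

```python
def adaptive_lr(epoch, lr0=0.02, decay=0.9, slope=1e-4, lr_min=1e-4):
    """Hypothetical schedule combining exponential and linear decay.
    The paper only states that such a combination is used with SVD++;
    this functional form is our illustrative assumption."""
    return max(lr0 * decay ** epoch - slope * epoch, lr_min)

print([round(adaptive_lr(e), 4) for e in (0, 5, 20, 50)])
```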
Conclusion
Most museums pay attention to external form in their display design and lack in-depth research and organization of the cultural relics; the display is only the placement of objects. With this traditional single display mode it is difficult to raise visitors' interest in visiting: both the display of objects and the signage are uniform and rigid, and the whole visit is dull. The recommendation-based display design simplifies the regression function of collaborative filtering, improves the recommendation effect, and alleviates the scalability problem. This paper concludes the following. (1) The model improves existing offline single-behavior recommendation methods so that they can make recommendations from streaming data; the idea of collaborative filtering becomes a highly interpretable technique for incrementally learning users' recent preferences from data streams. (2) In the SIM_COS algorithm, a course recommendation algorithm based on embedded features and user behavior introduces a deep learning network that fully considers user and other features, and a course recommendation model based on embedded features and LSTM is established. (3) When performing the nearest-neighbor search for similar groups of users, a single model may show small flaws in its effect. (4) The recommendation system algorithm improves recommendation efficiency by combining the advantages of memory-based and model-based collaborative filtering with smoothing. When the MAF is 40, the uniformity is the best: UBCF = 1.64, UICCF = 1.31, User-CT = 1.34, and User-CCIC = 1.47. Content-based recommendation algorithms originated from information retrieval. A cross-domain collaborative filtering scoring-matrix generation model establishes the correlation between multiple scoring matrices by finding a shared implicit cluster scoring matrix, which also improves the recommendation effect and alleviates the scalability problem. When the MAF is 50, the efficiency is the highest: UBCF = 1.98, UICCF = 1.58, User-CT = 1.65, and User-CCIC = 1.91. The adaptive learning-rate function, combining exponential and linear characteristics, is applied to the SVD++ recommendation algorithm, speeding up convergence and improving the accuracy and scalability of the recommendation model. Stochastic gradient descent is applied to the SVD algorithm, accelerating convergence and improving performance. Considering the interest changes caused by time factors in SVD effectively improves the accuracy of score prediction.
Data Availability
The experimental data used to support the findings of this study are available from the corresponding author upon request.
"year": 2022,
"sha1": "1559fa371a4519164d8128215c6411704ec833dd",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/wcmc/2022/2814216.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "441aaa3e48f4ef566b318664382945eba6451187",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
Oral microbe-host interactions: influence of β-glucans on gene expression of inflammatory cytokines and metabolome profile
Background The aim of this study was to evaluate the effects of β-glucan on the expression of inflammatory mediators and the metabolomic profile of oral cells [keratinocytes (OBA-9) and fibroblasts (HGF-1) in a dual-chamber model] infected by Aggregatibacter actinomycetemcomitans. The periodontopathogen was applied and allowed to cross the top layer of cells (OBA-9) to reach the bottom layer of cells (HGF-1) and induce the synthesis of immune factors and cytokines in the host cells. β-glucan (10 μg/mL or 20 μg/mL) was added, and the transcriptional factors and metabolites produced were quantified in the remaining cell layers and supernatant. Results The relative expression of the interleukin (IL)-1-α and IL-18 genes in HGF-1 decreased with 10 μg/mL or 20 μg/mL of β-glucan, whereas the expression of PTGS-2 decreased only with 10 μg/mL. The expression of IL-1-α increased with 20 μg/mL and that of IL-18 increased with 10 μg/mL in OBA-9; the expression of BCL-2, EP-300, and PTGS-2 decreased with the higher dose of β-glucan. The metabolite 4-aminobutyric acid presented lower concentrations under 20 μg/mL, whereas the concentrations of 2-deoxytetronic acid NIST and oxalic acid decreased at both concentrations used. Acetophenone, benzoic acid, and pinitol presented reduced concentrations only when treated with 10 μg/mL of β-glucan. Conclusions Treatment with β-glucans positively modulated the immune response and the production of metabolites.
Background β-glucans from yeast have been used extensively as protective substances against infections, with potent effects on the innate and adaptive immune responses. β-glucans are non-starch polysaccharides that make up structural components of the cells of plants and microorganisms [1]. The cell wall of Saccharomyces cerevisiae is an important source of β-glucans, and these represent about 50-60% of it [2]. The protective effect of these compounds has been demonstrated in experimental infection [3]. Additionally, there are reports that these substances modulate allergy symptoms [4] and have anticancer properties [5, 6]. Many hypotheses have been put forward to explain the effects of β-glucans. Such compounds can act by inhibiting the adhesion of pathogens to epithelial tissues of the digestive tract by blocking carbohydrate-binding adhesins on bacteria; they stimulate the immunocompetent cells in Peyer's patches and the consecutive activation of mechanisms of innate and adaptive immune defense; further, by adsorbing mycotoxins in food (when added to the diet), β-glucans inhibit their toxic activity [2].
However, its effects on periodontal inflammation are still poorly studied. Periodontal disease is highly prevalent in the adult population. It is characterized by inflammation and progressive destruction of the periodontal tissues in response to specific microorganisms present in the oral biofilm [7-10]. The pathogens associated with periodontal disease are frequently present in the human subgingival microbiota and are represented mainly by anaerobic gram-negative bacteria [11]. A. actinomycetemcomitans, of the Pasteurellaceae family, is a fermentative, capnophilic, non-motile, non-sporulating gram-negative coccobacillus. This bacterium is considered the main etiological agent of localized aggressive periodontitis lesions but is also associated with chronic periodontitis [12-18]. The progression of periodontal disease is associated with the virulence of the microorganism together with the susceptibility of the host [19]. Several virulence factors of A. actinomycetemcomitans contribute to its pathogenicity in periodontitis [20]. Leukotoxin, cytolethal distending toxins, bacteriocins, adhesins, and lipopolysaccharide are among the virulence factors that may be associated with the pathogenesis of localized aggressive periodontitis [21]. These virulence factors are responsible for interacting with the host cells and triggering an inflammatory response in the tissues supporting the teeth [22].
Fibroblasts and epithelial cells are the first cells to be activated in the oral cavity in response to the exotoxic and endotoxic virulence factors of A. actinomycetemcomitans, playing an essential role in the production of cytokines involved in the inflammatory process. After this first local colonization, leukocytes (mainly monocytes and neutrophils) and dendritic cells are recruited to the site of infection, continuing the inflammatory response [22, 23].
Recently, in vivo studies have demonstrated that β-glucans from S. cerevisiae present regulatory activity toward metabolism [24] and also modulate the expression of cyclooxygenase-2 (COX-2), receptor activator of nuclear factor kappa-B ligand (RANK-L), and osteoprotegerin (OPG), decreasing the alveolar bone loss caused by induced periodontal disease (ligature) in normal and diabetic animals [25]. However, the molecular and biochemical mechanisms involved in β-glucan activity in periodontal disease are still not understood, demanding further research with advanced tissue culture techniques examining the microbiota-host interaction. In that sense, the dual-chamber model is an interesting in vitro model that mimics the human periodontium. It is constructed using a monolayer of epithelial keratinocytes and a subepithelial layer of fibroblasts on which the invasive periodontopathogen can be applied [26].
Thus, this study aims to evaluate the effects of β-glucan on the expression of inflammatory mediators and the metabolomic profile of oral cells using a dual-chamber model of epithelial and subepithelial cells infected by A. actinomycetemcomitans.
β-Glucan
The β-glucan utilized was the glucan from baker's yeast S. cerevisiae (Sigma-Aldrich; St. Louis, MO), with a purity of 98%. Sterilized deionized water was used as the vehicle for β-glucan dilution.
Antimicrobial activity
As a preliminary step, the antimicrobial activity and cytotoxicity of β-glucan were tested in order to determine the subsequent doses in the dual-chamber model. Antimicrobial activity was evaluated against A. actinomycetemcomitans after 24 h of treatment. Microorganisms were inoculated (1 × 10^6 cfu/mL; colony-forming units per milliliter) in a 96-well microtiter plate with Trypticase Soy Broth (TSB; Becton Dickinson, Franklin Lakes, NJ), and β-glucan was immediately added at various concentrations (0 as control, and subsequently from 1 μg/mL to 100 μg/mL) to determine the minimum inhibitory concentration (MIC) [30]. Microplates were maintained in a humidified incubator at 37°C and 5% CO2. After 24 h, the contents of the wells were inoculated in Petri dishes with Trypticase Soy Agar (TSA; Becton Dickinson, Franklin Lakes, NJ) and incubated for 3 days. After this period, the cfu/mL was determined.
Cytotoxicity assay
The in vitro cytotoxic effect was measured by the fluorometric resazurin method [31]. OBA-9 or HGF-1 cells, cultured in DMEM medium (Lonza, Walkersville, MD) with 10% fetal bovine serum (FBS; Lonza, Walkersville, MD), were seeded (1 × 10^5 cells/mL) in a 96-well microtiter plate and maintained in a humidified incubator at 37°C and 5% CO2. After 24 h, cell morphology was observed under an inverted microscope (EVOS FL; Life Technologies, Carlsbad, CA) to confirm adherence to the wells and to note any morphological changes. β-glucan (1-100 μg/mL) was added to the cell culture and incubated at 37°C and 5% CO2. After 24 h, the medium was discarded, the cells were washed with warm PBS (Lonza, Walkersville, MD), and the wells were replenished with fresh medium containing resazurin (CellTiter-Blue Viability Assay; Promega Corp, Madison, WI) [32]. Subsequently the plate was incubated at 37°C and 5% CO2.
After 4 h, the contents of the wells were transferred to a new microplate and the fluorescence was read in a microplate reader (SpectraMax M5; Molecular Devices, Sunnyvale, CA) with excitation at 550 nm, emission at 585 nm, and a cutoff of 570 nm.
Dual-chamber assay
The immunological effects of β-glucan were investigated using a dual-chamber model to mimic the periodontium (Fig. 1). Transwell inserts (8 μm pore × 0.3 cm² of culture surface; Greiner Bio-One, Monroe, NC) were placed in a 24-well plate and OBA-9 cells (1 × 10^5) were seeded in the transwell inserts. HGF-1 cells (1 × 10^5) were seeded in the basal chamber. The plates were incubated at 37°C in humid air containing 5% CO2 for 24 h. The trans-epithelial electric resistance (TEER) of each cell layer was measured with a Millicell-ERS volt-ohm meter (Millipore, Bedford, MA). Cell layer confluence in the transwell insert was measured daily until optimal TEER was reached (>150 Ohm/cm²), which occurred on the second day, when the medium in the basal chamber and insert was replaced with new medium (DMEM) containing A. actinomycetemcomitans (1 × 10^6 cfu/mL). Medium containing the microorganism was added to the insert, passing through the upper layer of cells (OBA-9) and reaching the bottom cell layer (HGF-1) [26]. Immediately after inoculation of the dual chamber with A. actinomycetemcomitans, the β-glucan treatments (10 μg/mL or 20 μg/mL) were added and the plate was incubated at 37°C in humid air containing 5% CO2. The time of exposure of the microorganism to β-glucan was 24 h. Each experiment was repeated three times with two replicates per group (n = 6), and the experimental groups were divided as described in Table 1. The two doses used were determined from the results of the antimicrobial activity and cytotoxicity assays.
Sample collection for analysis
After the treatment period, the liquid contents of the wells were collected and centrifuged at 1200 rpm for 10 min. Following centrifugation, the supernatant was stored at −80°C for subsequent metabolomic analysis. The remaining cell layers on the surface of the inserts and of the plate wells were used for RNA isolation (OBA-9 and HGF-1 separately) for gene expression analysis by quantitative real-time PCR.
Gene expression - quantitative real-time PCR Total RNA was isolated according to the Qiagen RNeasy Mini Kit protocol (Qiagen; Valencia, CA). Purity and quantity of RNA were measured in a NanoPhotometer P360 (Implen; Westlake Village, CA). Total RNA was converted into single-stranded cDNA using a high-capacity reverse transcription kit (QuantiTect Reverse Transcription Kit; Qiagen; Valencia, CA). From the cDNA obtained, an array for evaluation of the gene expression of the inflammatory response by quantitative real-time PCR (Prime PCR Pathway Plate/Acute Inflammation Response; Bio-Rad, Hercules, CA) was performed. Based on the results of the array, five genes/primers were selected for detailed study: IL-1-α, IL-18, B-cell lymphoma-2 (BCL-2), E1A binding protein (EP300), and prostaglandin-endoperoxide synthase-2 (PTGS-2) (QuantiTect Primer Assay; Qiagen; Valencia, CA). For the selected primers, QuantiTect SYBR Green PCR Kits (Qiagen; Valencia, CA) were used. The reaction product was quantified by relative quantification using GAPDH as a reference gene. Threshold cycle (Ct) data from the real-time equipment (CFX Connect; Bio-Rad, Hercules, CA) were calculated and interpreted using the qPCR array data analysis tool. Analysis of the relative quantitation was done using the comparative ΔΔCt method [33].
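For reference, the comparative ΔΔCt calculation reduces to a few lines; the Ct values below are hypothetical, and GAPDH is the reference gene as stated above.

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Comparative 2^(-ddCt) method: normalise the target gene to the
    reference gene (GAPDH), then to the untreated control sample."""
    d_ct_treated = ct_target - ct_ref            # dCt of treated sample
    d_ct_control = ct_target_ctrl - ct_ref_ctrl  # dCt of control sample
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# hypothetical Ct values for one gene (e.g. IL-18) vs GAPDH
print(round(relative_expression(26.1, 18.0, 24.9, 18.2), 2))  # <1 = down-regulated
```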
Metabolome analysis
The cell culture supernatant contents of the wells were collected and centrifuged at 1200 rpm for 10 min at room temperature. The supernatant was then properly stored and sent for analysis at the West Coast Metabolomics Center (UC Davis Genome Center; Davis, CA). The metabolites were separated by gas chromatography/mass spectrometry (Agilent 6890, Santa Clara, CA/Leco Pegasus IV, St. Joseph, MI) according to standard methodology. The metabolites found were submitted to comparison software and compared with a standard library of metabolites. Subsequently, the data were submitted to statistical analysis at the West Coast Metabolomics Center (UC Davis Genome Center; Davis, CA) [34].
Statistical analysis
Statistical analyses were done using analysis of variance (ANOVA). When F values indicated significant interactions, these interactions were decomposed between factors. The analyses were performed in the statistical program SISVAR [35] at a significance level of α = 0.05.
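As an illustration of this analysis step, a one-way ANOVA can be run as follows; the sketch uses SciPy with made-up peak heights (the actual analyses were performed in SISVAR), so the numbers are purely illustrative.

```python
from scipy import stats

# hypothetical metabolite peak heights for control, 10 ug/mL, 20 ug/mL (n = 4)
control = [1450, 1390, 1510, 1475]
dose_10 = [1120, 1180, 1065, 1140]
dose_20 = [980, 1010, 940, 995]

f_stat, p_value = stats.f_oneway(control, dose_10, dose_20)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # significant if p < alpha = 0.05
```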
Results
The antibacterial activity of β-glucan started at 10 μg/mL. Cytotoxicity assays were conducted in HGF-1 and OBA-9 cells and the results are shown in Fig. 2. The first concentration used, 10 μg/mL of β-glucan, resulted in 125% viability for HGF-1 and 104% for OBA-9 cells (Fig. 2a). The second concentration used, 20 μg/mL of β-glucan, resulted in 100% viability for HGF-1 and 90% for OBA-9 cells (Fig. 2b).
Quantitative real-time PCR results are presented in Figs. 3 and 4. Based on the gene expression results of the inflammatory profile (acute inflammation response), five genes that showed the greatest variation in their expression (up- or down-regulation) were selected for detailed analysis: IL-1-α, IL-18, BCL-2, EP-300, and PTGS-2. The relative expression of the IL-1-α (Fig. 3a) and IL-18 (Fig. 3b) genes in HGF-1 decreased with 10 μg/mL or 20 μg/mL of β-glucan in comparison with the control group (p < 0.05). In the same way, the expression of PTGS-2 (Fig. 3e) decreased with the 10 μg/mL treatment; however, at a dose of 20 μg/mL it remained equal to that of the control group (p < 0.05). The expression of the BCL-2 (Fig. 3c) and EP-300 (Fig. 3d) genes was similar among groups.
Discussion
Human gingival fibroblasts represent the main cell type that forms the soft connective tissues of the periodontium. These cells interact directly with bacteria and their products [36] and play an essential role in the production of cytokines during the inflammatory process [23]. β-glucans have the capacity to stimulate the production of proinflammatory cytokines, thus modulating both specific and non-specific immune responses. Here, the authors extend their previous in vivo discovery [25] by showing the effects of β-glucans on the gene expression of inflammatory cytokines and the metabolomic profile of mammalian cells. For this study, the toxicity, anti-inflammatory activity, and effects on the transcriptome/metabolome of β-glucans on human cells were evaluated. The gene expression of IL-1-α and IL-18 in fibroblasts was reduced in the models treated with β-glucans. IL-1 is considered a marker of periodontitis because of its involvement in the inflammation process (as an inflammatory mediator) and its participation in extracellular matrix and bone metabolism [37, 38]. In a study of experimental gingivitis, an increased concentration of IL-1 in gingival crevicular fluid was demonstrated [39]. The expression of IL-1-α and IL-1-β was induced in vitro in cultured gingival epithelial cells challenged with A. actinomycetemcomitans extracts [40]. These results indicate that gingival epithelial cells are the main source of these interleukins in the periodontium, where they induce the production of additional inflammatory mediators [40]. IL-18 has pleiotropic action and participates in the innate and acquired immune responses [41], indicating a positive effect of β-glucan in reducing the expression of both IL-1-α and IL-18 in human fibroblasts. The decrease in these parameters may suggest an improvement in the inflammatory response associated with the immunomodulatory effects of β-glucans together with their antimicrobial activity [3, 42-44]. Antagonistically, the expression of these same cytokines (IL-1-α and IL-18) observed in keratinocytes (OBA-9) showed a result contrary to that seen in fibroblasts (HGF-1): treatment with β-glucan increased the expression of IL-1-α and IL-18. This response may be due to a compensatory interaction between these different cell types. According to Di et al. [45], the expression of KGF (keratinocyte growth factor) and KGFR (keratinocyte growth factor receptor) observed in cocultures of keratinocytes and fibroblasts was influenced by the interaction of these different gingival cells. According to these authors, keratinocytes and fibroblasts can interact to dynamically regulate gene expression, which could have had such an effect on gingival cell conditions after treatment. In addition, the use of β-glucan decreased BCL-2 expression in keratinocytes.

[Figs. 3 and 4 caption: Dual-chamber model inoculated with A. actinomycetemcomitans and treated with different doses of β-glucan. The control group mean is set equal to 1 and the treated groups are expressed relative to the control group. Results are expressed as mean followed by standard deviation; n = 6 and p < 0.05.]
This protein exerts an antiapoptotic function, playing an essential role in the development of the immune response and in tissue homeostasis [46]. β-glucan therapy regulated the expression of other immunomodulatory genes (EP300 and PTGS2), showing an effect on more than one signaling pathway, which can result in an important therapeutic effect. EP300, also known as p300, is involved in cell growth, proliferation, apoptosis, and embryogenesis [47, 48]. Changes in its structure (derived from mutations) and altered activity of this protein are linked with inflammation, malignant tumors, and developmental abnormalities [48]. Deng et al. [49] observed that p300 is involved in the stimulation of COX-2 expression induced by proinflammatory mediators. In the current study, treatment with β-glucan reduced EP300 expression in keratinocytes.
PTGS2, also known as COX-2, is an enzyme involved in the conversion of arachidonic acid to prostaglandins, playing an important role in the inflammatory response of periodontal tissues [50]. This enzyme has a preferentially inducible profile and is expressed by cells related to inflammatory processes [51], such as the response to inoculation by pathogenic microorganisms. A recent study performed by our research group demonstrated lower COX-2 expression in diabetic rats with induced periodontal disease that were treated with β-glucan from S. cerevisiae [25]. Similarly, the present study showed a reduction in PTGS-2 expression, suggesting an improvement in the inflammatory profile as a function of treatment with β-glucan.
The metabolomic study in the present work explored the influence of β-glucan treatment on the cellular metabolic profile and found significant changes in 4-aminobutyric acid, 2-deoxytetronic acid NIST, oxalic acid, acetophenone NIST, benzoic acid, and pinitol. 4-aminobutyric acid, more commonly known as gamma-aminobutyric acid (GABA), is a non-protein amino acid that acts as the main inhibitory neurotransmitter of the central nervous system in animals and humans [52]. Some studies have linked increased intake of GABA or its analogs with multiple health benefits, for example, lowering blood pressure in hypertensive animals and humans [53][54][55][56]. In addition, studies indicate that GABA ingested from enriched natural sources has an inhibitory effect on the proliferation of cancer cells and an enhancing action on cancer cell apoptosis [57]. Other compounds, such as benzoic acid and pinitol, are derived from plants and have multifunctional properties. Benzoic acid is an aromatic carboxylic acid present in the tissues of plants and animals and can also be produced by microorganisms [58]. Pinitol, also called D-pinitol, is a compound with multifunctional properties, among them anti-inflammatory, cardioprotective, and antihyperlipidemic actions. Furthermore, pinitol is known to have properties similar to those of insulin [59][60][61].
[Table legend: The data are expressed as relative peak heights (mAU) from HPLC-MS analysis, which are unit-less (mean followed by standard deviation); n = 4 and P < 0.05.]
A previous study compared the metabolomic profiles of patients with different levels of gingival bleeding. The metabolomic analysis in that study indicated significant changes in the composition of metabolites, especially the short-chain carboxylic acids propionate and n-butyrate, which tracked clinical changes in gingivitis severity [62]. Another study analyzed the metabolomic profile of saliva and plasma samples from diabetic patients with a healthy periodontium, gingivitis, or periodontitis. The authors observed increased levels of markers of cellular energetic stress, increased purine degradation and glutathione metabolism (through increased levels of oxidized glutathione and cysteine-glutathione disulfide), markers of oxidative stress (guanosine and inosine), increased amino acid levels suggesting protein degradation, and increased ω-3 (docosapentaenoate) and ω-6 fatty acids (linoleate and arachidonate). According to the authors, these metabolites associated with the periodontal condition may be useful for developing diagnostics and therapeutics adapted to the diabetic population [63]. Thus, we believe that metabolomic profile analysis may be a useful tool for investigating the action of β-glucans on periodontal disease, and that changes in metabolites can be used as markers of the disease.
The results observed in the present study demonstrated that β-glucan was able to modulate gene expression and alter the concentrations of different metabolites, modifying the immune cell response to a challenge with A. actinomycetemcomitans. β-glucan treatment (10 μg/mL or 20 μg/mL) reduced the concentrations of 4-aminobutyric acid, 2-deoxytetronic acid NIST, oxalic acid, acetophenone NIST, benzoic acid, and pinitol. In fibroblasts (HGF-1), the relative expression of the IL-1-α, IL-18, and PTGS2 genes decreased with 10 μg/mL or 20 μg/mL of β-glucan. In keratinocytes (OBA-9), the expression of BCL-2, EP300, and PTGS2 decreased with the higher dose of β-glucan. Such genes are considered markers for many dysfunctions, such as periodontal disease, owing to their functions as inflammatory mediators. The modulation of the expression of these marker genes may indicate an improvement in the inflammatory profile and a possible reduction in microbial activity.
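As the figure legends indicate, gene expression was reported relative to a control mean set to 1. The Python sketch below illustrates this normalization with hypothetical replicate values (n = 6); the Welch t-test is an illustrative assumption, not necessarily the statistic the authors used.

import numpy as np
from scipy import stats

def relative_expression(control, treated):
    """Express each replicate relative to the control mean, so the
    control group mean equals 1 (as in the figure legends)."""
    control = np.asarray(control, dtype=float)
    treated = np.asarray(treated, dtype=float)
    rel_control = control / control.mean()   # mean == 1 by construction
    rel_treated = treated / control.mean()
    # Welch's t-test on the relative values (significance at P < 0.05)
    t, p = stats.ttest_ind(rel_treated, rel_control, equal_var=False)
    return rel_treated.mean(), rel_treated.std(ddof=1), p

# Hypothetical IL-1-α expression values, n = 6 per group
mean_rel, sd_rel, p = relative_expression(
    control=[1.02, 0.95, 1.10, 0.98, 1.05, 0.90],
    treated=[0.55, 0.62, 0.48, 0.70, 0.58, 0.61],
)
print(f"relative expression = {mean_rel:.2f} ± {sd_rel:.2f}, P = {p:.4f}")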
Conclusions
Treatment with β-glucans from Saccharomyces cerevisiae administered for 24 h in a dual-chamber model positively modulated the immune response and metabolite production. | 2017-08-03T01:33:44.050Z | 2017-03-07T00:00:00.000 | {
"year": 2017,
"sha1": "30c1e04e2df1eb2e9026efd423cb05f16dc27825",
"oa_license": "CCBY",
"oa_url": "https://bmcmicrobiol.biomedcentral.com/track/pdf/10.1186/s12866-017-0946-1",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2a9fcdef3cd00651dfaa3aa782a67cc66c7d3f0f",
"s2fieldsofstudy": [
"Biology",
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
129100773 | pes2o/s2orc | v3-fos-license | From “naked country” to “sheltering ice”: Rudy Wiebe's revisionist treatment of John Franklin's first arctic narrative
Rudy Wiebe's A Discovery of Strangers (1994) offers a revisionist construction of Franklin's first expedition to find the North-West Passage, one that attempts to show the disparate views of the landscape held by the British explorers and the Yellowknife of the Coppermine region (one of the Dene peoples), and to sound a warning about the devastating effects of the arrogant will to dominate the environment. True to the conventions of historical fiction, Wiebe makes Franklin himself a largely peripheral figure, choosing to focus on lesser-known participants in the events of 1821.
Rudy Wiebe's revisionist treatment of John Franklin's first arctic narrative
Over the last thirty-five years, the body of Canadian historical fiction has grown tremendously and the genre enjoys great popularity. Herb Wyile claims that "[t]he notion that historical discourse is essentially speculative rather than mimetic has certainly given novelists the elbow room to develop their own speculative fictions" (13): works that are self-reflexive and that complicate their relationship to sources, works that point up the fact that the past is envisioned through the context of the present. Rudy Wiebe's A Discovery of Strangers (1994) offers a revisionist construction of Franklin's first expedition to find the North-West Passage, one that attempts to show the disparate views of the landscape held by the British explorers and the Yellowknife of the Coppermine region (one of the Dene peoples), and to sound a warning about the devastating effects of the arrogant will to dominate the environment. True to the conventions of historical fiction, Wiebe makes Franklin himself a largely peripheral figure, choosing to focus on lesser-known participants in the events of 1821. In keeping with the Canadian tradition of historiographic metafiction, Wiebe engages in the dialogic interweaving of sections of published accounts of the trip and his own narrative constructions. By inserting excerpts from the journals of Franklin, of expedition doctor John Richardson, and of George Back and Robert Hood (both midshipmen and artists) into a novel that both examines the contact between British and Native North American cultures and portrays a strong and vital Native community, Wiebe dismantles the hierarchy and authority that Franklin and his officers wished to establish in the interests of the Empire. The efforts of Wiebe, a well-established white writer, to change non-Native readers' perceptions of the far north as a desolate and barren place are clear in his depiction of the "sheltering ice" (11) and the "life-giving cold" (317), as well as in his elevation of non-human animals in imaginative importance and his representation of Dene society.
Given that Franklin's third expedition inspired so many search parties and so many books speculating on the fate of its members, Wiebe's decision to write a fictional recreation of the first voyage is a key one. Christy Collis writes of Franklin's final and fatal 1845 expedition and "the continuing need" on the part of "scientists, archaeologists, Canadian Armed Forces regiments, novelists, poets, literary critics, and cartographers" to "close this lost narrative of the North," but Wiebe's choice speaks of a desire to pursue the question not of what happened to Franklin, but of what happened to the Tetsot'ine (the Yellowknife), a people that had largely disappeared by the end of the nineteenth century. While the narrative remains within its purported nineteenth-century period, the connections to the general legacy of colonization for the First Peoples are clear.
Tales of danger, hardship, and heroism were part of the attraction of exploration narratives in 1823, and Franklin, who came to be called "the man who ate his boots," was seen as a hero; the published account of exploration, starvation, hypothermia, murder, and possible cannibalism was a bestseller. While subsequent assessments of Franklin have called his leadership decisions and expedition planning into question, Wiebe looks to the greater issues of cultural perception. While Franklin was certainly not responsible for the conflicts between the two fur-trading companies (the Hudson's Bay Company and the North West Company) and the fact that the promises of aid from these companies were often not fulfilled, Richard Davis asserts that the expedition was, in fact, "an ill-prepared undertaking," and Franklin arguably brought some of the hardships upon his party through his inflexibility and his failure to give proper credence to the advice and warnings of the First Nations people from whom he sought assistance but to whom he believed himself culturally and technologically superior.
In June 1821, the expedition set out for the Coppermine River and the Polar Sea. The inadequacy of birch-bark canoes in the rocky and icy terrain soon became evident, and after turning back at Point Turnagain in August, Franklin opted to return to Fort Enterprise via the Hood River, and the group ended up walking across the uncharted country of the Barren Lands. The canoes were broken by the time they reached the Coppermine River, and it was only thanks to one of the interpreters, St Germain, who fashioned a canoe using willow branches and canvas, that they were able to make the crossing. It is striking how, after Franklin's account of days spent walking, charting, and naming landscape features after Britons of note (such as members of the Admiralty and his own mentor Matthew Flinders) in August 1821, Franklin's focus suddenly narrows to a recording of the availability of food and fuel. Reduced to eating tripe de roche (lichen) and bits of old leather, the men's hunger, growing debility, and flagging hope are the main concerns of the narrative of the journey across the Barrens, which Franklin calls "this naked country" (393). The following entry, dated Sept. 3, 1821, clearly illustrates their privation: "As we had nothing to eat, and were destitute of the means of making a fire, we remained in our beds all the day; but the covering of our blankets was insufficient to prevent us from feeling the severity of the frost, and suffering inconvenience from the drifting of the snow into our tents. There was no abatement of the storm next day; our tents were completely frozen, and the snow had drifted around them to a depth of three feet, and even in the inside there was a covering of several inches on our blankets. Our suffering from cold, in a comfortless canvass tent in such weather, with the temperature at 20°, and without fire, will easily be imagined; it was, however, less than that which we felt from hunger" (401). The group splinters as some are unable to continue and others are sent to find lost men or to seek help. Hood is shot, apparently by Michel Terohaute, a Mohawk voyageur whom Richardson in his journal accuses of cannibalism as well as murder and treachery, and whom the doctor takes it upon himself to execute. Torn by gales, frozen on a treacherous and shelterless expanse, weakened physically and mentally by starvation, Franklin and his remaining men become hardened survivors of what is represented as a barren and hostile land. They are saved only by the fact that, when they return to Fort Enterprise to find it deserted, Back sets out to find members of the Dene community that had helped them and returns with aid and food. All but five of the group of thirteen voyageurs and interpreters perish. Robert Hood is the only officer to die. Astoundingly, Franklin still maintains a degree of cultural superiority as he writes of the care provided by the Dene: "The Indians prepared our encampment, cooked for us, and fed us as if we had been children; evincing humanity that would have done honour to the most civilized people" (471).
The title of Wiebe's novel, in its ambiguity, marks the beginning of the revisionist process. As critics such as Wyile point out, "The programmed response is to read the 'discovery' as the customary, active pursuit of the explorer. Here, however, the use of 'strangers' suggests that it is the explorers themselves who are discovered, in this case by the Tetsot'ine or Dene. The title's inversion of the trajectory of colonial exploration mirrors the general inversion of the narrative as a whole, which focuses largely on the reaction of the Tetsot'ine to the coming of the whites" (38). The Dene culture is one that privileges memory, local knowledge, and a healthy respect for the land, and the main Dene characters in the novel repeatedly point up the practices and traits of the British that will disrupt the delicate balance of life in the north, bringing death and disease, in the short term for the expedition and in the long term for the Dene.
Wiebe's first chapter, narrated from the point of view of "The Animals in This Country" (the title of the chapter), the caribou and wolf in particular, gives primacy to the non-human animals. The chapter title also echoes Margaret Atwood's poem entitled "The Animals in That Country," in which Atwood distinguishes between animals in "that country" and "this country." The distinction reflects the imaginative importance of animals in what might be termed the "Old World" legends and mythologies and the relative anonymity of non-human animals in a place like Canada. Atwood writes, In that country the animals have the faces of people: the ceremonial cats possessing the streets the fox run politely to earth… In this country the animals have the faces of animals.
Their eyes flash once in car headlights and are gone.
Their deaths are not elegant.
They have the faces of no-one.
Wiebe reminds readers that the animals in this country, the country of his story, are unconcerned with the perceptions of white explorers: they have their own cycles of life and story into which they allow the Native peoples some entry and import. The following excerpt from the opening story of a caribou cow and her yearling calf on the journey to the calving grounds shows an interrelationship of human and non-human animals far from the hierarchical one established in Genesis or the source of imaginative significance in Atwood's poem: "The caribou cow with three tines on each of her antlers lay curled, bedded, and at momentary rest with her calf in the lee of her body. She had once been a woman; in fact, she has already been born a woman twice. But she has never liked that very much, and each time she is born that way she lives human only until her dreams are strong enough to call her innumerable caribou family, and they come for her" (3). The opening paragraph of the novel begins with the non-human animals and anticipates the end of the British expedition and the failure of its members to adapt to the landscape: "The land is so long, and the people travelling in it so few, the curious animals barely notice them from one lifetime to the next. The human beings whose name is Tetsot'ine live here with great care, their feet travelling year after year those paths where the animals can easily avoid them if they want to, or follow, or circle back ahead to watch them with little danger. Therefore, when the first one or two Whites appeared in this country, an animal would have had to search for four lifetimes to find them being paddled about, or walking, or bent and staggering, somewhere on the inexorable land" (1). The Dene live "with great care" in balance with the landscape and the seasonal movements of the caribou. The land is its own force, one that demands respect, and, as the adjective "inexorable" suggests, one that will not yield to, or indulge, the imperialist demands, expectations, or control of a few. The intrusion or invasion of the Whites, the strangers here, disrupts the balance of Tetsot'ine society. The caribou are only a natural resource to be exploited and over-hunted, for the British have no connection to them through story and tradition. Their method of killing, with guns and from a distance, when adopted by the Tetsot'ine, begins to sever the connection between the hunter and the animal that the Tetsot'ine say gives its life.
Wiebe's portrait of past events within this contact zone is very much coloured by his research and by the scholarship on the place of exploration literature within imperialist discourses. The list of acknowledgements following the narrative speaks volumes about the research in which Wiebe engaged and about his desire to speculate not just on the Dene peoples' lives but on their interpretation of British culture. The influence of source texts, such as Kerry Abel's Drum Songs: Glimpses of Dene History, is felt throughout Wiebe's portrayal of the Tetsot'ine. Abel asserts that "Dene morality and philosophy were … highly practical …" As Abel explains it, "One's goal was to make life as comfortable as possible while minimizing one's demands on others. The Dene did not share the Christian belief in the sinful nature of humankind. Nor did they separate the universe into sacred and profane or natural and supernatural spheres. People were meant to live as a part of the universe and not to attempt to dominate over it or to change it. In a complex system of interrelationships in the universe, the Dene found a sophisticated and practical means to deal with the problems of life. Flexibility, adaptability, individual initiative, and social responsibility were interwoven in a society that coped remarkably successfully in an environment that outsiders were later to describe as hostile and barren" (42). These qualities and abilities inform the construction of Keskarrah, Birdseye, Greenstockings, and Greywing, the tightly-knit Dene family who provide many of the insights into Dene culture and the critical views of the British expedition. Diametrically opposed to the Dene way of being in the world in Wiebe's novel are the social and political hierarchies of the expedition's members, the enormous demands the officers placed on the Dene (and on the natural resources of the area), and the relative resistance to the changes in practice that might have kept more of the men alive. A key difference between the cultural groups lies in the perception of the landscape and of humans' place within the delicate balance of the environment.
Keskarrah, who was a guide and map-maker for the Franklin expedition, becomes one of the most vocal critics of the British in Wiebe's novel. He expresses his distrust of and contempt for the officers' actions and arrogance in a number of areas, including the naming and mapping of the land. In relation to Inuit mapping, Renée Fossett has explained that "European observers [often] failed to understand the link between on-the-ground representations and verbal instruction and the primacy of the oral component"; as well, "[d]istance was indicated by means that did not accord with European linear measures; travel time was taken into account" (119). Keskarrah scoffs at the arbitrariness of the names the officers choose, saying: "These English. Who also tried to name every lake and river with whatever sound slips from their mouths: Singing Lake and Aurora and Grizzle Bear and Snare lakes… it is truly difficult for a few men who glance at it once to name an entire country" (22). The importance of local knowledge and story is evident in the following passage: "Of course, every place already was its true and exact name. Birdseye and Keskarrah between them knew the land, each name a story complete in their heads. Keskarrah could see, there, in the shape and turn of an eddy, the broken brush at the last edge of the trees, the rocks of every place where he waited for caribou, or had been given to know and dream; and Birdseye had walked everywhere, under packs, or paddled, following or leading him, looking at each place where the fell of soft caribou and thick marten or fox turned continually into clothing for People in her hands: in their lifetime of ceaseless travel and thought, the way any Tetsot'ine must if they would live the life of this land" (24). Of the officers' insistence on traveling the land without adequate knowledge or supplies and in spite of warnings, Keskarrah thinks, "It seemed they had heard only their one telling, as told to themselves" (15). "Whitemuds," he says, "hear only what they want to hear" (131). The Dene elder also cannot fathom why they endeavour to fix with their instruments and records that which is always changing.
European cartography has been discussed by many critics in terms of the charting, claiming, and possessing of lands previously unmapped (by Europeans). As Graham Huggan has argued, "cartography symbolized the colonial desire for a systematic organization of space grounded in a mimetic, logocentric relation between the map and the mapped… [P]ostcolonial texts deploy cartographic tropes to expose and deconstruct the imposition of colonial perception" (Wyile 40). The notion of the land as palimpsest is common in the works of Canadian authors such as Wiebe, Robert Kroetsch, and Al Purdy, and A Discovery of Strangers explores alternative and equally valid methods of mapping space.
Robert Hood, the "primary surveyor and draughtsman" (Houston) on Franklin's first expedition, is one of two members of the expedition whom Wiebe constructs with some openness to the Dene culture and some ability to see beyond the framework of their social and military (and, in Hood's case, artistic) training. (The other is John Hepburn.) Both artistic representations of the landscape and maps of the topography were integral parts of the report on this search for a North-West Passage. Ian MacLaren, one of the scholars Wiebe acknowledges, has written extensively on how British landscape aesthetics shaped the views of the northern landscape, as the explorers looked to frame their written and pictorial descriptions within the conventions of the picturesque. The officers were on occasion surprised to find a scene worthy of William Gilpin's attention, such as that of Wilberforce Falls on the Hood River in what is now Nunavut. Franklin describes "[t]he river [that] precipitates itself into it over a rock, forming two magnificent and picturesque falls close to each other" and names the "magnificent cascades" Wilberforce Falls "as a tribute of [his] respect for that distinguished philanthropist and Christian" (398). In this instance, Franklin is able to have both artists frame the scene for the British reading and viewing public in a conventional style and to distinguish it with a revered Christian name.
However, a letter from Richardson to Back (spring 1821) expresses the more common disappointment in the landscape and its failure to inspire aesthetic praise: "[William] Gilpin himself, that celebrated picturesque hunter, would have made a fruitless journey had he come with us. We followed the lakes and low grounds, which, after leaving Martin Lake, were so deeply covered with snow that it was impossible to distinguish lake from moor… The only variety that we had was in crossing two extensive ridges of land which lie at the distance of seven or eight miles from each other, and nearly half way to the river… nowhere did I see anything worthy of your pencil. So much for the country. It is a barren subject, and deserves to be thus briefly dismissed" (qtd. in MacLaren 78). Richardson may be writing of a specific region, but in general, according to MacLaren, "where the picturesque cannot be found, [Richardson's] interest in the landscape wanes"; however, MacLaren asserts that the letter "shows more than disgust with the land: the confusion over topographical distinctions between water and land demonstrates that not only is his taxonomy of landscape description inappropriate, but his mode of perceiving the external world is unavailing" (78).
Hood's growing awareness in Wiebe's novel of the limitations of the aesthetic frames of reference extends from landscape to people. The following passage depicts the artist's frustration and implies a subtle transformation: "In the last canoe, Robert Hood had been trying all morning to capture once more, on a small piece of paper, a coherent quadrant of the world through which he was being carried. But even after an exhausting year of continuously widening vistas, he was tempted to look sideways, tugged towards a periphery in the corner of his eye, that, when he yielded, was still never there. Riding motionless in the canoe on this usual lake, he felt his body slowly tighten, twist; as if it were forming into a gradual spiral that might turn his head off at the neck. Like one of those pathetic little trees, enduring forever a relentless side wind so that it could only twist itself upwards year after year by eighth-of-an-inching; or like the owl in the story that turned its head in a circle, staring with intense fixity, trying to discover all around itself that perfect sphere of unbordered sameness and, at the moment of discovery that the continuous world was, nevertheless, not at all or anywhere ever the same, it had completed its own strangulation… But his sketch must stop, must have frame!… [A] tall tree on either side, that was still a possible frame, if he drew them foreground enough.
But he had drawn that so often! Scribbling in trees where none could exist…" (61-62). Through the use of simile, Wiebe connects Hood to the native trees (perhaps a nod to Al Purdy's poem "Trees at the Arctic Circle") and to the owl that recognizes the ever-changing nature of the world only as it dies. Hood is no longer satisfied with his conventional manufacturing of landscapes. Soon after, Hood is at a loss about how to depict a scene of Dene grief: "It seemed to him he was praying, for a revelation. How could they have existed here ages before they were known of? How would he draw a sorrow he could barely hear" (68). This culture is beyond Hood's ken and his representational abilities.
But it is Hood's desire to sketch Greenstockings, Keskarrah and Birdseye's daughter, that draws him into their "endlessly warm lodge" (227) and into the circle of Greenstockings' "everlasting arms" (230). Greenstockings is as critical of the expedition members as her father is, for she sees firsthand the danger of too many men and a lack of balance in her world; nevertheless, she embraces Hood and later bears their child. Their happiness is brief, and Hood's parallel experiences in Greenstockings' lodge and in an expedition tent on the Barrens take on allegorical dimensions.
In Greenstockings' lodge, the lovers feed each other and Greenstockings sings a traditional song celebrating the caribou, who eat the lichen and in turn nourish the Dene. Later, in a tent that Wiebe deems "the labyrinth of their disaster" (220), Hepburn feeds Hood "the horrible tripe de roche [lichen] that scours [his] mouth and throat bloody" (220). In the lodge, Greenstockings tells Hood that starving people may depend on ravens and "the compassionate wolves," their "sisters and brothers," to guide them to food (173). In spite of the language barrier, Hood understands her on some level, for his thoughts echo her words during his starvation-induced delirium on the Barrens. Nevertheless, his debilitated state prevents him from acting on her advice, and he is forced to eat what Michel offers as wolf meat but which may be human. Whether the meat is wolf or human, the act of eating is an act of cannibalism.
Ultimately, Hood is neither strong enough nor influential enough to effect any change in Franklin's plans or in his dealings with the Dene. His death in the novel and the extremity of the expedition's circumstances appear self-inflicted, as the parodic inversions of the early scenes of harmony and wisdom result from the failure to adapt to and respect the land and its Native peoples. Communion becomes cannibalism. Hood becomes the owl who dies at the moment of revelation. Historical events and the painful legacy of colonization in Canada do not allow Wiebe to indulge readers by giving them a happy ending. By fleshing out the character of Robert Hood, however, he does attempt to offer the possibility of a connection between British and Native cultures. Hood, though, dies on the Barrens. Hope, it seems, is always qualified or compromised or killed.
Wiebe's work of historiographic metafiction depicts the landscape and peoples of the arctic and sub-arctic regions forever changed by the "brutal hiss and clangour" (2) and the "strange and various sicknesses" (315) that are the destructive signs of European invasion. The final pages look to the period when George Back returns to map more of the country, when disease and war with the Dogrib people have killed so many of the Tetsot'ine. The Tetsot'ine way of life is irrevocably changed because "it is of course so much more manly and exciting to use guns to steal food and wives and clothing and dogs and territory from enemies than to work for them in the slow, considerate ways of the living land. … [S]ickness and the men's unrelenting aggression … destroy Greenstockings' People" (315-16). Wiebe offers his answer to the question of what happened to the Tetsot'ine, but this answer is tied to the ongoing legacy of the will to dominate that still poses the greatest threat to the north. | 2017-09-07T10:04:42.262Z | 2008-02-01T00:00:00.000 | {
"year": 2008,
"sha1": "b4c8db67baff0a590bbf7988c91ba58a342ec9f5",
"oa_license": "CCBY",
"oa_url": "https://septentrio.uit.no/index.php/nordlit/article/download/1161/1104",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "b4c8db67baff0a590bbf7988c91ba58a342ec9f5",
"s2fieldsofstudy": [
"History"
],
"extfieldsofstudy": [
"Geography"
]
} |
253841932 | pes2o/s2orc | v3-fos-license | The effect of home-based transcranial direct current stimulation in cognitive performance in fibromyalgia: A randomized, double-blind sham-controlled trial
Background: Transcranial direct current stimulation (tDCS) is a promising approach to improving fibromyalgia (FM) symptoms, including cognitive impairment. So, we evaluated the efficacy and safety of home-based tDCS in treating cognitive impairment. Besides, we explored whether the severity of dysfunction of the descending pain modulatory system (DPMS) predicts the tDCS effect and whether its effect is linked to changes in neuroplasticity as measured by the brain-derived neurotrophic factor (BDNF). Methods: This randomized, double-blind, parallel, sham-controlled, single-center clinical trial included 36 women with FM, aged from 30 to 65 years old, assigned 2:1 to receive a-tDCS (n = 24) or s-tDCS (n = 12). The primary outcome was executive attention, divided attention, working memory (WM), and cognitive flexibility assessed by the Trail Making Test (TMT-B-A). The secondary outcomes were the Controlled Oral Word Association Test (COWAT), the WM assessed by the Digits subtest from the Wechsler Adult Intelligence Scale (WAIS-III), and quality of life. Patients received twenty-minute daily sessions of home-based tDCS for 4 weeks (a total of 20 sessions), with 2 mA anodal-left (F3) and cathodal-right (F4) prefrontal stimulation delivered through 35 cm² carbon electrodes. Results: GLM showed a main effect for treatment in the TMT-B-A [Wald χ2 = 6.176; Df = 1; P = 0.03]. The a-tDCS improved cognitive performance. The effect size estimated by Cohen's d at treatment end in the TMT-B-A scores was large [–1.48, confidence interval (CI) 95% = –2.07 to –0.90]. Likewise, a-tDCS compared to s-tDCS improved performance in the WM, verbal and phonemic fluency, and the quality-of-life scale. The impact of a-tDCS on the cognitive tests was positively correlated with the reduction in serum BDNF from baseline to treatment end. Besides, the decrease in serum BDNF was positively associated with improvement in quality of life due to FM symptoms. Conclusion: These findings revealed that daily treatment with a home-based tDCS device over the l-DLPFC, compared to sham stimulation, over 4 weeks improved cognitive impairment in FM. The a-tDCS at home was well-tolerated, underlining its potential as an alternative treatment for cognitive dysfunction. Besides, the a-tDCS effect is related to the severity of DPMS dysfunction and to changes in the neuroplasticity state. Clinical trial registration: [www.ClinicalTrials.gov], identifier [NCT03843203].
Introduction
Fibromyalgia (FM) comprises widespread chronic pain and concurs with significant emotional distress associated with functional disability for daily activities. The symptoms linger or recur for at least 3 months without other conditions explaining the pain (Treede et al., 2015; Dueñas et al., 2016). The symptom severity scale of the American College of Rheumatology (ACR, 2016) diagnostic criteria includes cognitive impairment among the core symptoms of FM (Montoro et al., 2015). Attention, perception, memory, executive functioning, and language abilities are essential components of cognition (Gellman and Rick Turner, 2013). The processing of cognitive components includes active decision-making, learning, and memory of past events (Hansen and Streltzer, 2005; Moriarty et al., 2011). This complex processing involves extensive cortical and subcortical neural circuitry responsible for perception, localization, processing, relaying, and pain modulation. Thus, the pain experience is modulated by affective-motivational and cognitive-evaluative components rather than being a purely sensory phenomenon (Tyng et al., 2017). Chronic pain syndromes, such as FM, have been linked to disturbances in cognitive processing (Khera and Rangasamy, 2021).
There is evidence that pain and neurocognition have anatomical, biochemical, and molecular associations (Khera and Rangasamy, 2021). The frontal lobes control executive functions, particularly the orbitofrontal cortex, anterior cingulate cortex, and dorsolateral prefrontal cortex (DLPFC) (Verdejo-García et al., 2009). The somatosensory cortex distinguishes between painful and non-painful sensations, whereas the medial thalamus and anterior cingulate cortex (ACC) record the stimuli as painful. The emotional component of pain perception and memory formation are both impacted by this encoding process, which is also linked to enhanced functional connectivity between the thalamus and the mPFC (Tseng et al., 2017). There is an overlap of brain structures involved in executive function and pain perception, and both cognitive impairment and chronic pain involve maladaptive neuroplasticity processes (Khera and Rangasamy, 2021). Higher executive functioning requires the ability to make emotional decisions (Tyng et al., 2017). The affected cognitive domains include executive function, learning, memory, sustained focus, processing speed, and psychomotor ability (Khera and Rangasamy, 2021). Cognitive impairment hinders interaction with the environment and generates difficulties with working memory (WM) (Miller et al., 2018). The WM is responsible for the temporary storage and manipulation of information necessary to perform complex tasks, such as language comprehension, learning, and reasoning (Cowan, 2014). It is essential to the adequate performance of complex behaviors. Hence, when it fails, so does the capacity to carry out daily living activities and the ability to elaborate pain confrontation strategies (D'Esposito and Postle, 2015). In FM, the core complaints related to cognitive impairment are mental confusion, concentration difficulties, and failing memory. This set of symptoms is often called "FibroFog" (Kravitz and Katz, 2015; Walitt et al., 2016; Bell et al., 2018). According to a recent study, FM patients performed less accurately on activities requiring divided attention and attentional switching (Moore et al., 2019). Although chronic pain impacts cognitive function in FM, this impairment does not seem to correlate with that seen in other musculoskeletal or neuropathic pain conditions (Grisart and Van der Linden, 2001; Verdejo-García et al., 2009).
Clinical and preclinical studies indicate a bidirectional link between cognition and chronic pain (Serrano et al., 2022). The targets of chronic pain treatment comprise modulation of central sensory processing, either of pain transmission [e.g., opioids and tricyclic antidepressants (TCAs)] or of neural excitability (e.g., anticonvulsants). However, multiple medicines are needed to treat FM symptoms, and some might worsen cognitive impairment (e.g., opioids) (Ngian et al., 2011). Although modulation of central sensory pain processing is a treatment target, pharmacological approaches might be ineffective in many patients (Schiltenwolf et al., 2014), and some can worsen cognitive performance, hence the interest in non-pharmacological interventions. Among these interventions, transcranial direct current stimulation (tDCS) has demonstrated clinical benefits for complex chronic pain conditions, such as FM (Zortea et al., 2019). The main target for applying anodal (a)-tDCS for pain is the primary motor cortex (M1), based on the rationale that it enhances the excitability of the sensory-discriminative networks (Zortea et al., 2019). Another potential target area for a-tDCS is the DLPFC, since stimulation there has been found to have beneficial effects on mood regulation, cognitive functions, and maladaptive emotional functioning (Dixon et al., 2017; Sankarasubramanian et al., 2017). Regarding the impact of a-tDCS on FM, its use over the left (l)-DLPFC revealed benefits on cognitive performance, and its use at home was effective in improving pain (Brietzke et al., 2020) and pain catastrophizing (Caumo et al., 2022).
The a-tDCS can modulate cortical and subcortical neural networks, inducing a top-down effect. Studies in healthy controls (HC) demonstrate that a-tDCS over the l-DLPFC improved digit-span performance (Rottschy et al., 2012; Barbey et al., 2013). However, other studies found that it enhanced digit-span performance only if the stimulus had been paired with an online WM (n-back) task (Hill et al., 2019). Additionally, we showed that the alertness, orienting, and executive control attentional networks are all modulated by a single session of a-tDCS with 2 mA administered to the l-DLPFC in combination with a Go/No-go test (Silva et al., 2017). Besides, studies found that the impact of tDCS on pain and cognitive function is neuroplasticity state-dependent, as indexed by brain-derived neurotrophic factor (BDNF) (Brietzke et al., 2019; da Graca-Tarragó et al., 2019). In the same perspective, earlier studies found that serum BDNF is positively associated with dysfunction of the descending pain modulatory system (DPMS) (Caumo et al., 2016; Soldatelli et al., 2021) and that BDNF likely mediates the a-tDCS effect in the improvement of the DPMS (da Graca-Tarragó et al., 2019; Beltran Serrano et al., 2020). Hence, substantial evidence supports the critical role of BDNF in synaptic plasticity, learning, and memory (Kowianski et al., 2018), and a decrease of this neurotrophic factor in the hippocampus is related to worse cognitive performance on memory tasks (Etnier et al., 2015). In this setting, it is reasonable to consider BDNF a neural plasticity marker involved in the tDCS effects, either on pain processing or on cognitive functions (Cocco et al., 2018; Santos et al., 2018). Within this frame, more in-depth analyses of tDCS action are important for comprehending the molecular and neurophysiological mechanisms subtending tDCS effects on cognitive processes and DPMS dysfunction (Soldatelli et al., 2021). So, comprehension of its impact on neuroplasticity processes, with the perspective of linking them with clinical effectiveness, might help with better use of this technique.
Thus, we aimed to determine whether 20 sessions of a-tDCS on the left (l)-DLPFC with cathodal stimulation on the right (r)-DLPFC over 4 weeks, self-applied at home, would be superior to sham (s)-tDCS in improving executive attention, divided attention, working memory, and cognitive flexibility assessed by the Trail Making Test (TMT-B-A) (primary outcome). Additionally, we evaluated its impact on executive functioning (Controlled Oral Word Association Test; COWAT), WM (Digits subtest from the Wechsler Adult Intelligence Scale; WAIS-III), and quality of life (secondary outcomes). We investigated whether the tDCS effects were related to the severity of DPMS dysfunction at the start of treatment and to neuroplasticity changes evaluated by the percent change in BDNF from pre-treatment to treatment end. We hypothesized that a-tDCS could improve cognitive performance more effectively than s-tDCS. Besides, we investigated whether these effects were correlated with the degree of pain processing pathway malfunction as measured by the baseline DPMS deficit. We also investigated whether the tDCS effects are mediated by changes in the neuroplasticity state, as indexed by serum BDNF.
Study design and eligibility
The trial's protocol was approved by the research ethics committee at the Hospital de Clínicas de Porto Alegre (HCPA), Brazil (Institutional Review Board CAAE registry 36995020.3.0000.5327; Research Ethics Committee registration number 2017-0330). Each patient gave verbal and written consent to participate in this randomized, double-blind, sham-controlled trial. No compensation was given to participants in exchange for their participation.
Inclusion and exclusion criteria
We included right-handed adult females aged 30-65 years who met the diagnostic criteria for fibromyalgia according to the American College of Rheumatology (ACR, 2016). They were recruited through newspaper advertisements and from the outpatient pain clinic at HCPA. The FM diagnosis was confirmed by a Brazilian board-certified pain specialist. To be included, they needed to be literate and to report a score of at least six on the Numerical Pain Scale (NPS 0-10) on most days of the previous 3 months. Additionally, they had to consent to continue taking their medication during the study at the same doses used during the month preceding the study. The exclusion criteria comprised a history of brain surgery, a tumor, a stroke, or the implantation of intracranial metal. Additionally, individuals were excluded if they had used illicit drugs during the previous 6 months or had an uncompensated clinical illness (i.e., ischemic heart disease, renal disease, hepatic disease, diabetes mellitus, hypertension, etc.). Rheumatoid arthritis, lupus, autoimmune disease, neurologic or oncologic disease, and COVID symptoms were additional exclusion criteria.
Sample size justification
Sample size estimation was based on a previous study that tested ten sessions of a-tDCS over the l-DLPFC in non-demented, ambulatory older adult patients using the Trail Making Test (TMT-B-A) (Manor et al., 2018). Our estimation used a two-tailed test for a ratio of 2:1 (a-tDCS vs. s-tDCS on the DLPFC), a type I error of 5%, and a power of 80%. The standard deviation (SD) from the s-tDCS group was used as a reference to estimate the effect size (Manor et al., 2018). For an effect size (f) of large magnitude equal to 1.02, considering a pooled SD at treatment end equal to 34, the estimated sample size was 30 patients. We included an additional 20% of subjects to account for possible dropouts. Thus, the final sample size was 36 patients (24 in the a-tDCS vs. 12 in the s-tDCS group).
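A minimal Python sketch of this kind of power calculation is shown below. It treats the reported effect size as a standardized mean difference for a two-sample t-test with 2:1 allocation, which is an illustrative assumption; the exact output depends on the approximation used and is not a reproduction of the authors' calculation.

# Two-tailed, 2:1-allocation power calculation (alpha = 0.05, power = 0.80);
# effect_size here is a Cohen's d-style standardized mean difference.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_active = analysis.solve_power(
    effect_size=1.02,
    alpha=0.05,
    power=0.80,
    ratio=0.5,               # n_sham = 0.5 * n_active (2:1 allocation)
    alternative="two-sided",
)
n_sham = 0.5 * n_active
print(f"active = {n_active:.1f}, sham = {n_sham:.1f}")
# Adding ~20% for possible dropouts yields the final sample of 36 (24 vs. 12).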
Randomization
Thirty-six patients were randomized at an allocation of 2:1 to the a-tDCS or s-tDCS group, using random numbers created with appropriate software. We employed randomization in three blocks of 12 patients to prevent prediction of the treatment allocation. Two investigators who were not involved in the patient assessments conducted the randomization before the recruitment stage. Envelopes containing the randomization numbers were prepared, sealed, and numbered in sequential order. Research partners not involved in the trial, and in contact with neither the subjects nor the evaluations, opened the envelopes and programmed the devices.
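A minimal sketch of this blocked 2:1 allocation follows; the group labels, block composition (8 active : 4 sham per block of 12), and seed are illustrative assumptions, not the authors' software.

# Block randomization: three blocks of 12 at a 2:1 active-to-sham ratio.
import random

def block_randomize(n_blocks=3, block_size=12, ratio=(2, 1), seed=42):
    rng = random.Random(seed)
    per_block_active = block_size * ratio[0] // sum(ratio)   # 8 per block
    per_block_sham = block_size - per_block_active           # 4 per block
    allocation = []
    for _ in range(n_blocks):
        block = ["a-tDCS"] * per_block_active + ["s-tDCS"] * per_block_sham
        rng.shuffle(block)  # order within each block is random
        allocation.extend(block)
    return allocation

print(block_randomize())  # 36 assignments: 24 active and 12 sham overall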
Blinding
Participants were uninformed throughout the entire program of whether their therapy was active or sham. Additionally, the allocation was unknown to the research team, the investigators who assisted with patient care, and the people who applied the scales. The s-tDCS group's device was set up to deliver 30 s of stimulation over the course of the 20-min session: at the beginning, after 10 min, and at the end of the stimulation period, the device automatically switched on and off. By employing this strategy, we concealed the intervention from all research members until the treatment ended.
Intervention
The anode was placed over the l-DLPFC (F3) and the cathode over the r-DLPFC (F4), according to the 10-20 EEG system. The treatment was administered for five consecutive days per week over 4 weeks, totaling 20 sessions.
Participants received the programmed device to use at home. For active a-tDCS, the current applied was 2 mA for 20 min (Brietzke et al., 2020). For the sham s-tDCS condition, the montage was the same as for active tDCS. A 30-s ramp-up in intensity from zero to 2 mA was used for both a-tDCS and s-tDCS stimulation, as well as a ramp-down of about the same duration, as explained in the blinding section. The current was delivered through two silicone cannulas attached to 35 cm² (5 × 7 cm) electrodes covered in sponges moistened with saline solution. The device was programmed by a single biomedical engineer to provide a set number of stimulation sessions, with a minimum gap of 16 h between successive sessions. Details about the protocol can be found in the complementary material and in the paper by Santos et al. (2021).
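The stimulation profile can be pictured as a simple piecewise function, as in the sketch below. It assumes the 20-min figure refers to the plateau between the two ramps, which is an illustrative simplification rather than the device's documented behavior.

# Stimulation profile: 30-s ramp-up to 2 mA, 20-min plateau, 30-s ramp-down.
def current_ma(t_s: float, ramp_s: float = 30.0,
               plateau_s: float = 1200.0, peak_ma: float = 2.0) -> float:
    if t_s < 0:
        return 0.0
    if t_s < ramp_s:                          # linear ramp-up
        return peak_ma * t_s / ramp_s
    if t_s < ramp_s + plateau_s:              # 20-min plateau at 2 mA
        return peak_ma
    if t_s < 2 * ramp_s + plateau_s:          # linear ramp-down
        return peak_ma * (2 * ramp_s + plateau_s - t_s) / ramp_s
    return 0.0

print(current_ma(15.0))    # 1.0 mA, halfway up the ramp
print(current_ma(600.0))   # 2.0 mA, on the plateau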
The home-based tDCS treatment protocol followed the standardized steps described below: (1) visits to the facility, (2) cap size and electrode placement, (3) training, and (4) compliance with the protocol, proper application, and adverse effects.
(i) Visits to the laboratory as part of the methodology. First visit: upon arriving at the laboratory, participants provided formal written consent, had the diagnosis confirmed, completed the sociodemographic questionnaire, underwent the cognitive tests, and completed the other baseline assessment procedures. They were also given information regarding the protocol they would follow. Second visit: the first 20-min treatment session was administered, which also included a training session on how to use the device at home. Third visit: the patient returned the device to the lab after completing the assessment at the end of treatment, which took place after 4 weeks of tDCS at home.
(ii) Cap size and electrode position, training session, protocol compliance, and adherence.
(a) Procedures to choose the cap size and electrode positions: following the measurement of head circumference, the researcher selected the cap size from small (38 cm × 55 cm), medium (39 cm × 57.5 cm), and large (40 cm × 59 cm). The researcher then localized the electrode positions using the 10-20 EEG system and placed the electrodes in the F3 and F4 positions to deliver current to the scalp. The user cannot move the electrodes once they are inside the sponges, so the exact location of the electrodes during stimulation is assured.
(b) Training session and instructions on how to self-apply the tDCS: after guiding the participants through the information in the home-use tDCS manual and answering any questions, we conducted a face-to-face training session in the clinical research facility at HCPA in Porto Alegre, Brazil. Patients can access the step-by-step procedure for self-administration of tDCS at the following link (YouTube: https://youtu.be/3Wtji4esOGE).
(c) Protocol compliance, appropriate use, and recording of adverse effects during the home tDCS sessions: participants received instructions to pick a peaceful moment during the day to administer the therapy session. One research team member remotely supervised the first session at home (the second overall). If participants had questions or issues with the device, they could contact the research team via WhatsApp at any time. The researcher in charge of getting in touch with patients did so once a week. The tDCS device software recorded every session. Additionally, participants were instructed to note any adverse effects in their diary immediately after each session.
(d) Control of adherence: an engineer who was not involved in the patients' treatment oversaw downloading the data stored in the software during the treatment to maintain the study team's blinding. Such data include records of the hour of use, time of use, impedance, resistance, and the number of sessions. The timeline of the study is presented in Figure 1.
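A minimal sketch of how such device logs could be represented and checked is shown below. The field names mirror the data listed above, and the 10-session adherence threshold is taken from the statistical analysis section; the structure itself is an illustrative assumption, not the device's actual export format.

# Per-session record downloaded from the device software (illustrative).
from dataclasses import dataclass

@dataclass
class SessionLog:
    session_number: int      # 1..20
    start_hour: str          # hour of use, e.g., "14:30"
    duration_min: float      # time of use; target is 20 min
    impedance_kohm: float    # contact quality during stimulation
    resistance_kohm: float

def adherent(logs: list[SessionLog], required: int = 10) -> bool:
    """ITT criterion: at least 50% of the 20 planned sessions completed."""
    completed = sum(1 for s in logs if s.duration_min >= 20)
    return completed >= required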
Instruments and assessments
Pain scales, psychological assessments, and psychophysical measurements were all performed by two evaluators unaware of the group assignment. The cognitive assessments were conducted by two trained psychologists. The tests were administered with auditory or paper stimuli and oral responses. All evaluators received specific training, which followed a sequence of steps: (i) read and study the manuals for each test; (ii) observe the administration of the tests by an experienced examiner; (iii) practice on volunteers in role-playing sessions; and (iv) discuss problems and questions with local experts if needed. All assessments were performed in a quiet and private area without interruptions, after patients received correct and clear instructions for each test delivered in a slow speaking voice.
Outcomes
The primary outcome was executive function, defined as TMT Part B minus Part A.
a. Trail Making Test (TMT A-B): the TMT A-B measures working memory, executive attention, cognitive flexibility, divided attention, and processing speed (Reitan and Wolfson, 1993; Lezak et al., 2004). The score depends on the time needed to finish the task and the number of errors. Higher scores indicate lower performance.
b. Digits subtest from Wechsler Adult Intelligence Scale (WAIS-III):
The Digits subtest consists of eight series of digits presented aloud to the subject, who is asked to repeat them in the same order (forward), and seven sequences to be repeated in inverse order (backward), each series gradually increasing in the number of digits (Nascimento, 2004; Wechsler, 2004). The Digits test assesses working memory.
Higher scores indicate better performance.
c. Controlled Oral Word Association Test (COWAT): this verbal fluency test evaluates both linguistic and executive skills, including cognitive flexibility, strategy use, interference suppression, and response inhibition (Hedden and Yoon, 2006; Schinka et al., 2010). Higher scores indicate better performance. d. Fibromyalgia Impact Questionnaire (FIQ): proposed by Burckhardt et al. (1991) to assess quality of life in FM patients; we used the version adapted for use in Brazil (Paiva et al., 2013). The FIQ consists of 10 domains. The items evaluate the patient's capacity to perform everyday activities as well as fatigue, morning stiffness, mood, anxiety, and depression. The maximum score is 100. Higher scores indicate worse quality of life due to FM symptoms.
Psychophysical measurements, depressive symptoms, sleep quality, and serum brain-derived neurotrophic factor. e. The conditioned pain modulation test (CPM-test) was evaluated by the following sequence of procedures: first, we employed the thermo-test, with the thermode placed on the ventral surface of the non-dominant forearm, to define the temperature that produced a score of 6/10 (NPS, 0-10), taken as the average of three successive measures (T0). Second, patients submerged their dominant hand for 1 min in water at a temperature of 0-1 °C. Thirty seconds after they dipped the dominant hand in cold water, the non-dominant forearm underwent the QST thermo-test, and the pain intensity in the thermode region was measured using a scale of 0-10 (QST + CPM-test) (T1). Third, we calculated the CPM-test score as the difference between the NPS (0-10) score at the temperature previously set to produce 6/10 in the thermo-test region and the reference value of 6 (Botelho et al., 2016; Soldatelli et al., 2021). j. Dosage of serum BDNF levels: we used blood collection tubes with gel and clot activator. After centrifuging the blood samples, the serum was divided into 0.5 ml aliquots for additional examination. According to the manufacturer's instructions, a sandwich ELISA was used to measure serum levels of BDNF using monoclonal antibodies specific for the neurotrophin (R&D Systems, Minneapolis, United States). To evaluate inter-assay variation, two plates per kit were utilized on two distinct days of the same week. Protocols followed the manufacturer's instructions. Serum BDNF was determined by enzyme-linked immunosorbent assay (ELISA); the kit's lower detection limit for BDNF is 7.8 pg/ml. The ChemiKine BDNF Sandwich ELISA kit, CYT306 (Chemicon/Millipore, Billerica, MA, USA), was used for the assay. A GloMax-Multi Microplate Reader (Promega) or the Bio-Plex 200 device (Bio-Rad) was used to assess optical density for multiplexing assay readings. Total protein was measured by the Bradford method using bovine serum albumin as a standard. The data were presented as pg/mg of protein.
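A minimal sketch of the CPM-test scoring described in item (e) follows; the function and the example values are illustrative, not the authors' software.

# CPM score = NPS (0-10) at the pre-calibrated 6/10 temperature during the
# cold-water conditioning stimulus, minus the reference value of 6.
def cpm_score(nps_during_conditioning: float, reference: float = 6.0) -> float:
    """Negative values indicate efficient descending pain inhibition."""
    return nps_during_conditioning - reference

print(cpm_score(4.5))   # -1.5 -> pain decreased: functional DPMS
print(cpm_score(7.0))   #  1.0 -> pain increased: DPMS dysfunction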
Clinical measurements: CSS symptoms, pain scores, and analgesic use. k. A standardized questionnaire was used to evaluate demographic information and medical comorbidities. Patients self-reported diagnoses, medication use, medical procedures, and pain-related problems. l. The Numerical Pain Scale (NPS) was used to measure the level of pain. The NPS scores range from zero (no pain) to 10 (worst possible pain). Patients responded to the following question: how severe was your worst pain over the past week? m. The symptoms of central sensitization were evaluated using the Central Sensitization Inventory for the Brazilian Population (CSI-BP). Its 25 items (total score of 0-100) examine urological symptoms, headache/jaw symptoms, mental distress, and physical problems. Higher ratings reflect more severe symptoms. Part B of the CSI-BP also evaluates neurological conditions linked to central sensitization and psychiatric diagnoses (Caumo et al., 2017). n. Patients could use extra analgesic medication (such as acetaminophen, ibuprofen, or tramadol) if required to treat their pain. As rescue analgesia, they could take 500 mg of acetaminophen up to four times daily (QID). If their discomfort continued, they could take Dorflex (Sanofi Aventis, São Paulo, Brazil; 35 mg orphenadrine citrate combined with 300 mg dipyrone and 50 mg caffeine) up to three times daily (TID). If their discomfort still continued, patients could utilize tramadol at their highest tolerated daily dose.
Statistical analysis
Continuous and categorical variables were compared using Fisher's exact test, the chi-square test, and the t-test for independent samples. We used the Shapiro-Wilk normality test to determine whether the continuous variables displayed a normal distribution. We used the Mann-Whitney U test for comparisons between groups and the Wilcoxon test for comparisons within groups. Furthermore, we used a linear regression model to examine the impact of the treatment. The models included the treatment group (a-tDCS or s-tDCS) as a factor, and the dependent variables were evaluated as the percent change in the average [((post-intervention value minus pre-intervention value)/pre-intervention value) × 100]. The primary outcome was assessed by the Trail Making Test (TMT-B-A). The secondary outcomes were the following: working memory, verbal fluency (semantic and orthographic), phonemic fluency, and the impact of FM symptoms on quality of life. Since cognitive measures exhibit substantial individual variability on the same test and have no reference value to define the severity of cognitive impairment, we utilized the percent change in the average from pre-treatment to treatment end. We restricted the intention-to-treat (ITT) analysis to subjects who had received at least 50% of the total protocol sessions, that is, 10 sessions. We used a single imputation approach for missing data, replacing missing values with the mean for the outcome variables (Dziura et al., 2013). We used the pooled baseline standard deviation (SD) to calculate the effect size (ES) as the standardized difference in means (SDM) (mean difference, a-tDCS vs. s-tDCS). The ES was considered small if it ranged from 0.20 to 0.49, moderate if it ranged from 0.50 to 0.79, and large if it was equal to 0.80 or over (Kazis et al., 1989). Spearman's correlation analysis was used to test the correlation between the average percent change (pre-intervention to treatment end) of the TMT-B-A, FIQ, and serum BDNF in the a-tDCS and s-tDCS groups. All analyses used two-tailed tests with a significance threshold of 5% and were adjusted for multiple comparisons using Bonferroni's test. Data were analyzed with SPSS, version 22.0 (SPSS, Chicago, IL).
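The analyses were run in SPSS; the Python sketch below merely illustrates the percent-change and standardized-mean-difference formulas described above, using hypothetical values rather than study data.

import numpy as np

def percent_change(pre, post):
    """Per-subject percent change: ((post - pre) / pre) * 100."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    return (post - pre) / pre * 100.0

def standardized_mean_difference(active, sham):
    """Mean difference divided by the pooled SD (Cohen's d-style);
    |d| >= 0.8 is interpreted as a large effect."""
    active, sham = np.asarray(active, float), np.asarray(sham, float)
    n1, n2 = len(active), len(sham)
    pooled_sd = np.sqrt(((n1 - 1) * active.var(ddof=1) +
                         (n2 - 1) * sham.var(ddof=1)) / (n1 + n2 - 2))
    return (active.mean() - sham.mean()) / pooled_sd

# Hypothetical TMT-B-A scores (seconds); negative change = improvement
active_pc = percent_change(pre=[80, 75, 90, 85], post=[55, 60, 58, 62])
sham_pc = percent_change(pre=[82, 78, 88, 84], post=[98, 95, 105, 100])
print(f"SDM = {standardized_mean_difference(active_pc, sham_pc):.2f}")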
Demographic and clinical characteristics of the subjects
We screened 63 patients, and 27 did not meet the inclusion criteria. The flowchart presents the reasons for exclusion (Figure 2). This study included 36 patients who were randomly assigned to receive either a-tDCS (n = 24) or s-tDCS (n = 12). Three patients discontinued therapy (two in the s-tDCS group and one in the a-tDCS group): one owing to a COVID infection that prevented her from applying the stimulation sessions, one because she did not feel the effects of the treatment quickly enough, and one because she did not have enough time to apply the treatment. We conducted an ITT analysis including all of them (n = 36), since all had completed at least 10 sessions of tDCS. Table 1 displays the demographic and clinical characteristics of the patients. Baseline features were balanced between treatment groups.
Univariate analysis: Intervention effects within groups on primary and secondary outcomes
The within-group treatment effects on the primary outcome [sustained and divided attention assessed through the Trail Making Test (TMT-B-A)] and the secondary outcomes [working memory, verbal fluency (semantic and orthographic), phonemic fluency, and impact of fibromyalgia symptoms on quality of life] are presented in Table 2. We report the mean (standard deviation) and median (interquartile range 25-75) at baseline and treatment end, as well as the effect size (ES), by group (s-tDCS and a-tDCS).
Primary outcome: Impact of transcranial direct current stimulation on executive attention, divided attention, and working memory by Trail Making Test-B-A
The GLM revealed a main effect of treatment assessed through the Trail Making Test (TMT-B-A) (Wald χ2 = 6.17; Df = 1, P = 0.013). The multivariate analysis of the adjusted TMT-B-A score presented in Table 3 revealed that a-tDCS reduced the total score in the Trail Making Test (TMT-B-A) by -29.53 (8.89), compared to an increase of 23.09 (16.32) with s-tDCS, as shown in Figure 3. The ES based on the SDM of a-tDCS vs. s-tDCS was large [-1.48, 95% confidence interval (CI) = -2.07 to -0.90]. It is important to note that a reduction in TMT-B-A scores indicates better cognitive performance.
According to the analysis presented in Table 3, the effect of a-tDCS on the TMT-B-A was positively correlated with the severity of dysfunction in the DPMS at baseline. To assess DPMS function, we used the NPS (0-10) as a continuous variable; thus, higher scores indicate lower efficiency of the DPMS. In addition, performance on the TMT-B-A was positively associated with greater decreases in serum BDNF from pre-intervention to treatment end. In other words, larger reductions in the TMT-B-A in the a-tDCS group were associated with a greater decrease in BDNF at treatment end.
Secondary outcomes: Impact of transcranial direct current stimulation on working memory (digits subtest from the Wechsler Adult Intelligence Scale), cognitive flexibility (Controlled Oral Word Association Test), and quality of life
The GLM revealed a main effect of treatment: a-tDCS, compared to s-tDCS, improved performance in working memory, verbal fluency (semantic and orthographic), and phonemic fluency, and reduced the impact of fibromyalgia symptoms on quality of life. Data are presented in Table 4.
Secondary outcomes' analysis: Trail Making Test, serum brain-derived neurotrophic factor, and quality of life after treatment end
In the a-tDCS group, there was a moderate positive correlation between the TMT-B-A at treatment end and changes in serum BDNF [Rho = 0.57, 95% confidence interval (CI) = 0.28-0.76; P = 0.01]. In contrast, in the s-tDCS group the correlation between these two variables was not significant [Rho = 0.25, 95% CI = -0.1 to 0.55; P = 0.70]. In the a-tDCS group, TMT-B-A scores at treatment end showed a moderate positive correlation with the scores for the impact of FM symptoms on quality of life [Rho = 0.66, 95% CI = 0.4-0.82; P = 0.001]. In contrast, in the s-tDCS group the correlation between these two variables was not significant [Rho = 0.20, 95% CI = -0.15 to 0.51; P = 0.60]. These non-parametric correlations show that patients who received a-tDCS had a marked improvement in cognitive performance. Likewise, they presented a larger reduction in serum BDNF, which was moderately and positively correlated with improved cognitive performance and with improvement of the symptoms that impact quality of life.
Assessment of adverse events and safety
The adverse effects, comprising headache, tingling, burning, redness, and itching, were not significantly different between a-tDCS and s-tDCS (see Table 5). Both groups experienced comparable mild side effects. Most side effects were rated as mild, and no patients discontinued therapy because of uncomfortable side effects.
To determine protocol compliance and adherence, we verified the number of valid sessions from the software's records.
Figure 2 (caption): Flowchart showing randomization, allocation, and progress through the study.
Figure 3 (caption): Mean percent change of averages from the pre-intervention period to the treatment end period of the total score in the Trail Making Test (TMT-B-A). Error bars indicate the standard error of the mean (SEM). Asterisks (*) positioned above symbols indicate significant differences (p < 0.05) between groups (a-tDCS and s-tDCS).
Discussion
This trial demonstrated that the current protocol of home-based a-tDCS, compared to sham stimulation over 4 weeks, improved cognitive performance and reduced serum BDNF at treatment end. Also, the severity of DPMS dysfunction at baseline predicted larger a-tDCS effects on cognitive impairment. Besides, we found that the reduction in BDNF related to a-tDCS is associated with improvement of symptoms due to FM. The study had a dropout rate of 10%, mainly due to restrictions on circulation in the streets instituted during the COVID-19 pandemic. Mild to moderate adverse events were more common in the active tDCS group, particularly skin tingling, burning, and itching, and global adherence was 83.03%. This trial has key methodological differences compared to previous studies on tDCS to improve cognitive impairment in FM. We used a home-based tDCS device that enabled a considerably higher number of sessions. Hence, to the best of our knowledge, this is the study that evaluated the highest number of sessions used to improve cognitive performance at home. This is particularly relevant since preliminary evidence points to the increased efficacy of tDCS with more extended periods of treatment (Brunoni et al., 2016; Castillo-Saavedra et al., 2016; Brietzke et al., 2020). Additionally, the need for daily visits to clinics or hospitals has always been a significant challenge for using tDCS in the clinical context (Charvet et al., 2020; Salehinejad et al., 2021). Thus, the home-based device opens a new window of opportunity, especially for subjects with physical or cognitive disabilities that hinder their access to the clinical center. These results corroborate other previous studies which found that a-tDCS on the DLPFC might activate regions associated with pain processing, such as the anterior cingulate cortex (ACC), the primary and secondary somatosensory cortices (SI, SII), the insula, and the thalamus (Apkarian et al., 2005; Leknes et al., 2008; Bushnell et al., 2013). Because pain and attention share the same cognitive network, an efficient cognitive system is hindered. Therefore, pain may impair voluntary attentional systems and associated executive functions (Eccleston and Crombez, 1999; Bell et al., 2018). A-tDCS can alter the electrical activity of specific brain regions, encourage cortical plasticity, and enhance functional connections in the area being treated, improving pain modulation and quality of life. Thus, the effect of a-tDCS modulates neuronal membrane potential in cortical and subcortical neural networks involved in cognitive functions and pain processing. This effect corroborates the results of an earlier trial of a single session of tDCS at 2 mA applied to the DLPFC in FM, which found improvements in the function of neural networks involved in spatial and executive attention, as well as a reduced perception of pain (Silva et al., 2017). Another trial also observed the benefit of eight tDCS sessions paired with cognitive training on working memory, verbal fluency, and immediate and delayed memory. Besides, the current findings are aligned with previous studies in patients with a depression diagnosis, which found that a-tDCS over the DLPFC reduced depressive symptoms and other symptoms linked to inappropriate emotional functioning (Brunoni et al., 2017), as well as reduced pain catastrophizing (Caumo et al., 2022) and improved cognitive functions (e.g., decision-making) (Dixon et al., 2017).
According to the literature, this effect can be related to top-down control that up-regulates reactions to positive emotional stimuli (Grimm et al., 2008; Goldin et al., 2009). The ability of a-tDCS on the l-DLPFC to alleviate cognitive abnormalities, notably hypoactivity in the l-DLPFC and hyperactivity in the r-DLPFC, may be one of the potential mechanisms behind these processes and behind its impact on cognitive impairment. This hypothesis is supported by a study that assessed how the conservation of inter-hemispheric connectivity could have cognitive implications (Krupnik et al., 2021). Thus, the current result might contribute to a greater understanding of the tDCS effect on brain function. According to the CPM-test, the severity of DPMS inhibitory dysfunction predicted a larger effect of a-tDCS compared to s-tDCS on improving cognitive function. This finding suggests that the a-tDCS impact on the outcomes is more evident in more severe disease. These findings demonstrate, from an integrative approach, that there is an interaction between the spino-bulbo-spinal loop and the neural network of cortical areas. They support the notion that the DPMS and the brain networks involved in cognitive processing share similar neurobiological workings. According to the research, the DLPFC is, therefore, a crucial brain area for modulating the experience of pain. The benefits of using the l-DLPFC as a target area to modulate pain corroborate meta-analysis data that showed an effect of a-tDCS on pain with a moderate ES (0.54) (Zortea et al., 2019). Besides, the DLPFC as a target area for improving cognitive performance finds support in data that link prefrontal cortex function with a decline in cognitive abilities (Wen et al., 2011; Wiseman et al., 2018), as well as with the impact of a-tDCS on the l-DLPFC on WM. Thus, it is plausible that the cognitive impairment in chronic pain encompasses dysfunctions in neural networks in brain areas with a central role either in pain (Staud and Spaeth, 2008; Bosma et al., 2016) or in cognition, such as the insula, ACC, and PFC (Nijs et al., 2021). The benefits of a-tDCS on the l-DLPFC are further supported by studies showing improvement in WM and in clinical and experimental pain, either by repetitive transcranial magnetic stimulation (rTMS) (Graff-Guerrero et al., 2005; Borckardt et al., 2007) or by a-tDCS. This information reveals that the downstream regulating circuits, including the anterior insula, hypothalamus, periaqueductal gray substance, nucleus accumbens, and rostroventral medulla, are involved in the processes encompassing the effects of a-tDCS on the l-DLPFC (Wager et al., 2013).
The effect of repetitive sessions of a-tDCS has been attributed to the induction of use-dependent neuroplasticity, which is related to "synaptic learning" and long-term changes that resemble the long-term potentiation (LTP) or long-term depression (LTD) of glutamatergic synapses (Nitsche and Paulus, 2000, 2001). The activity level of the underlying neuronal populations at stimulation time is a potentially important mediator of the effect of tDCS on brain function. This is further corroborated by the fact that the impact of tDCS on cognitive performance was positively correlated with the neuroplasticity state, according to the percent change in serum BDNF from pre-intervention to treatment end. This finding aids in understanding how a-tDCS affects faulty neuroplasticity, since it can alter mechanisms that include strengthening glutamatergic synapses while weakening GABAergic synapses (Coull et al., 2005). The ability of serum BDNF to predict the a-tDCS effect was found in our previous studies in FM with a-tDCS applied to the DLPFC for working memory. In a study with a similar montage, baseline BDNF predicted the tDCS effect on daily pain scores after sixty sessions of tDCS self-applied at home (Brietzke et al., 2020). Besides, in postoperative recovery from hallux valgus surgery, cerebrospinal fluid BDNF after two a-tDCS sessions was associated with lower pain scores and disability due to pain 7 days after surgery (Ribeiro et al., 2017). Earlier studies have found higher serum BDNF in FM compared to other chronic pain conditions and healthy subjects (Deitos et al., 2015; Stefani et al., 2019). Therefore, the decrease in serum BDNF in the a-tDCS group compared to the sham group, together with the improvement in cognitive function, suggests that the intervention counter-regulated the FM-related dysfunctional neuroplasticity. Despite the relevance of this finding in indicating how much this therapy might help to improve maladaptive neuroplasticity, the result should be interpreted cautiously because serum BDNF is an indirect measure of the neuroplasticity phenomenon.
Our findings should be viewed considering some limitations. First, although patients received comprehensive training in using the device, no remote monitoring of sessions was performed. Therefore, caution is needed in direct comparisons with studies with supervised electrode placement and supervised exposure. Second, we included only females to remove the potential bias due to sex, since it has been found that in women a-tDCS over the DLPFC produces a higher current flow to the frontal regions (Russell et al., 2017) and better performance in cognitive tasks than in men (Martin et al., 2017). Third, our findings are consistent with past research that supported this method of self-application for prolonged tDCS use at home. We also see outcomes similar to studies in which the therapy was administered under close observation. Fourth, the tDCS system used in the current study provides an effective technical solution: medical engineers who were not involved in the patients' care programmed the tDCS system according to the randomization sequence, ensuring that all members of the research team and patients were blinded. Fifth, in this study, high adherence was observed through the records of the devices in use, similar to those obtained in real-life environments. Sixth, although the randomization process permitted balanced a-tDCS and s-tDCS groups, the effects on cognitive performance may be confounded by other variables, such as psychiatric comorbidities (i.e., depression, anxiety, and sleep disturbance) (Austin et al., 2001; Airaksinen et al., 2005; Castaneda et al., 2008), or medication use, particularly opioids, which may lead to cognitive side effects that cannot be controlled entirely (Ersek et al., 2004). Seventh, there is no standard battery of neuropsychological tests for cognitive function assessment in chronic pain; the literature has recommended that cognitive assessment in FM should include tests to evaluate attention and WM, complex psychomotor speed, and executive functioning (Kravitz and Katz, 2015). Eighth, the allocation sequence was developed following a standard format described in the scientific literature. Table 1 reveals that most baseline variables are balanced across groups, indicating that randomization equilibrated the groups (a-tDCS and s-tDCS). Although the CSS baseline disparity between a-tDCS and s-tDCS may be attributable to random chance, it is not possible to rule out the effect of regression to the mean; that is, a higher initial score on the outcome might tend to be lower upon subsequent measurement. Ninth, we decided on a 2:1 allocation based on the rationale that fibromyalgia causes significant long-term suffering; with a lower number of participants in the sham group, we could treat more individuals actively, leading to increased adherence. Additionally, the higher sample size in the active group increases the ability to identify side effects (Dumville et al., 2006; Hey and Kimmelman, 2014). Finally, with an adherence rate of more than 85% to sessions in both the a-tDCS and s-tDCS groups, we adopted a strict and reproducible technique to demonstrate the efficacy and viability of tDCS at home. However, further studies must explore whether neurophysiological measures, such as EEG records, might help to shed light on the specific cognitive processes modulated by the intervention. Another avenue is to allow a more focal stimulation target area using multichannel tDCS montages.
These findings revealed that daily treatment with a home-based tDCS device over the l-DLPFC, compared to sham stimulation, over 4 weeks improved cognitive impairment in FM. The a-tDCS at home was well tolerated, underlining its potential as an alternative treatment for cognitive dysfunction. Besides, the a-tDCS effect is related to the severity of DPMS dysfunction and to changes in the neuroplasticity state.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
The Research Ethics Committee approved the protocol for this trial at the Hospital de Clínicas de Porto Alegre (HCPA), Brazil. The patients/participants provided their written informed consent to participate in this study.
Author contributions
WC, RA, PVS, and FF had substantial contributions to the conception and design of the work. CA, LR, PRS, DS, IL, FF, and WC drafted the work and revised it critically for important intellectual content. WC and FF agreed to be accountable for all aspects of the work, ensuring that questions related to the accuracy or integrity of any part of the work were appropriately investigated and resolved. All authors agreed and approved the final version of this work. | 2022-11-24T14:43:52.379Z | 2022-11-24T00:00:00.000 | {
"year": 2022,
"sha1": "00e848423950de42a9cf5ef72a723de4958e0cdd",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "00e848423950de42a9cf5ef72a723de4958e0cdd",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": []
} |
234095353 | pes2o/s2orc | v3-fos-license | A Review on Existing Tetracyclines Analogues and Their Pharmacologically Targeted SAR
Authors’ Contributions 1 Conception & Study design, Data Collection & Processing, Data Analysis and/or interpretation, Drafting of Manuscript, Critical Review. 2 Data Collection & Processing, Drafting of Manuscript, Critical Review. 3 Data Analysis and/or interpretation, Critical Review. 4 Conception & Study design, Data Collection & Processing. 5 Data Analysis and/or interpretation, Critical Review. 6 Data Collection & Processing, Data Analysis and/or interpretation. 7 Data Collection & Processing, Critical Review. 8 Data Analysis and/or interpretation. Acknowledgement The authors are thankful to the University of Central Punjab, Superior College Lahore, Riphah Institute of Pharmaceutical Sciences, and Punjab University, Pakistan for providing literature review facilities to carry out this work. Article info. Received: August 18, 2020 Accepted: January 13, 2021 Funding Source: Nil Conflict of Interest: Nil Cite this article: Tanveer S, Masood A, Ashiq K, Qayyum M, Bajwa MA, Rukh AS, Arshad A, Sattar R. A Review on Existing Tetracyclines Analogues and Their Pharmacologically Targeted SAR. RADS J Pharm Pharm Sci. 2020; 8(3):173-180. *Address of Correspondence Author: Samreen_tanveer@yahoo.com Background: Tetracyclines belong to a class of broad-spectrum antibiotics. Around the globe, they are prescribed to treat various gram-negative and gram-positive bacterial infections. Once in the cell, they reversibly bind to receptors located on the 30S subunit of the bacterial ribosome. They act by preventing protein synthesis, in turn halting bacterial growth. Aim and Objectives: The aim of the current review is to study tetracyclines, identifying potential activity against infections and highlighting the microbial resistance associated with various analogues. Material and Method: The data for this review were collected from various databases including Scopus, PubMed, Springer Link, and Google Scholar. To ensure credibility, only indexed articles were used in the current study. Result: The outcome of the study suggests that tetracyclines and a number of their analogues show selective bioactivity and affinity toward their biological targets. Through modification at certain positions, the activity of the drug is changed substantially. This not only affects therapeutic activity and safety profile but also influences bacterial resistance. Conclusion: As antibiotic resistance amongst bacteria is emerging tremendously, it demands more research. Novel analogues still need to be synthesized that would be helpful to cure infections caused by resistant bacteria. Further, these analogues can be tagged with radioisotopes, which would be helpful for the diagnosis and treatment of infectious diseases.
INTRODUCTION
Tetracyclines are broad-spectrum antibiotics which inhibit microbial protein synthesis by interfering with aminoacyl-tRNA binding at the acceptor site of the ribosome [1].
They exert their action by binding to the 30S ribosomal RNA [2]. They are effective against gram-positive and gram-negative micro-organisms. Tetracyclines are used widely due to their higher safety profile. They are also used prophylactically against Plasmodium falciparum in malaria, and against micro-organisms that are resistant to other antibiotics [3]. Tetracyclines consist of four linearly fused six-membered carbocyclic rings, as shown in Figure 1. Of ring C and ring D, one must be aromatic. Unsaturation at positions 2-3 and 11-12 is essential for activity, as is the presence of a keto-enol system at positions 1-3 and 11-12. Other important structural features in tetracyclines are the aminoacyl group at position 2, the tertiary amine at position 4, the diethyl group at position 5, and position 6 bearing hydroxyl and methyl groups [4].
Structure Activity Relationship
The amide functional group at position 2 should remain unsubstituted for activity; if substitution is necessary, one hydrogen can be replaced with an alkylaminomethyl group, as in rolitetracycline [5,6]. The presence of a tertiary amine at position 4 is important to keep the keto-enol system of ring A intact. The position 4 tertiary amine can bear substituents such as hydrazine, hydroxyl, or oxime. Epimerization occurs at the 5a position [7]. Electrophilic substitution can occur at positions 7 and 9 of ring D with a nitro group or halogens; halogens are mostly used because they are less carcinogenic to the host [8].
With respect to the discovery and development of tetracyclines, chlortetracycline and oxytetracycline were first obtained from Streptomyces aureofaciens and Streptomyces rimosus in the 1940s [9,10]. This discovery was followed by the synthesis of many semi-synthetic tetracyclines such as minocycline, methacycline, and doxycycline [11].
Tetracyclines were first discovered by Dr. Benjamin Duggar of Lederle Laboratories in the mid-1940s as the fermentation product of an unusual golden-colored soil bacterium named Streptomyces aureofaciens [12]. These tetracyclines and their analogues have a wide range of activity against microbes. Tigecycline was found to have antibacterial activity [13]. Omadacycline was the first intravenous and orally effective 9-aminomethylcycline in clinical development for use against multiple infectious diseases, including acute bacterial skin and skin structure infections (ABSSSI), community-acquired bacterial pneumonia (CABP), and urinary tract infections (UTI). The comparative in vitro activity of omadacycline was determined against a wide range of gram-positive clinical isolates, including methicillin-resistant Staphylococcus aureus (MRSA) [14], vancomycin-resistant Enterococcus (VRE), Lancefield groups A and B beta-hemolytic streptococci, penicillin-resistant Streptococcus pneumoniae (PRSP), and Haemophilus influenzae (H. influenzae), with reported omadacycline MIC90s for MRSA, VRE, and beta-hemolytic streptococci [15].
Table: tetracycline analogues, structural modifications, and reported mechanisms (fragments):
- Cl group at position 7 - Oxytetracycline [17]: primary target is the 30S ribosome/tRNA; exhibits antibacterial activity.
- Doxorubicin [19]: exerts its anticancer effect by apoptosis and oxidative stress mechanisms.
- OH group at the 5th position and deoxygenation at the 6th position - Lymecycline [21]: antifungal (oxidative stress); ionophore and chelating mechanism; substitution at position 2.
Medicinal Importance of Tetracyclines
Tetracyclines are broad-spectrum antibiotics whose activity has been evaluated against a wide array of bacterial infections [32]. Tetracyclines have been used extensively in the prophylaxis and treatment of bacterial infections, as they are inexpensive, broad-spectrum antimicrobials [33]. Tetracyclines are predominantly a low-cost alternative among antibiotics. Interestingly, certain tetracyclines have recently been used in the prevention of cancer recurrence by inhibiting enzymes and processes that usually stimulate the growth of cancerous cells [34,6]. These drugs may show potential for the long-term management of some types of cancers [11,35].
Radioprotective Activity
Kwanghee and coworkers in 2009 conducted research to identify medicinal agents that shield body tissues from the detrimental effects of radiation therapy. They tested the radioprotective activity of tetracyclines and fluoroquinolones in a murine lymphocyte model subjected to total body irradiation. Results showed that tetracyclines and fluoroquinolones exhibited marked radioprotective activity owing to their planar ring structure. Tetracyclines also averted the injurious effects of radiation on human lymphoid cells by preventing DNA strand breaks. These findings suggest that tetracyclines have substantial potential to reduce radiotherapy damage to normal tissues [36].
Tumor Detection
Radioisotopes of tetracyclines have been developed and used in localized tumor detection. The tetracycline radioisotope 99mTc has been successfully employed in external scanning of tumor lesions in rabbits, mice, rats, and humans [37].
Anticancer Activity
Leezenberg and Wesseling in 1979 carried out retrospective research on 218 cancer patients afflicted with nasopharyngeal cancer. This study aimed to evaluate the effects of tetracycline therapy on the life span of patients. Results revealed that patients who received tetracyclines not only lived longer but that tetracyclines also mitigated the detrimental effects of methotrexate. Tetracyclines are believed to exert this action through inhibition of mitochondrial protein synthesis [38].
A study revealed that a tetracycline-regulated gene delivery system, employed along with radiation therapy in a rat prostate cancer model, developed tumor immunity in cancerous rats and augmented the immune response [39].
Prevention of Corneal Ulceration
Tetracyclines are used as prophylactic treatment for corneal ulceration after severe ocular damage. They exert their action by inhibiting protein degradation through their suppressive action on neutrophil collagenase and alpha-1 antitrypsin degradation, and through their antioxidant activity [40].
Antimicrobial Activity
Analogues of tetracyclines also show promising antimicrobial activity. 9-substituted analogues of tetracyclines were synthesized by the reaction of organotin reagents with the C-9 diazonium tetrafluoroborate salt of tetracyclines. These analogues show significant activity against infections resistant to other antibiotics [41].
Tetracycline is used in a variety of bacterial infections of different body systems, such as the respiratory tract, urinary tract, intestine, reproductive organs, lymph nodes, and skin [42]. Many sexually transmitted diseases (STDs), including syphilis, gonorrhea, and chlamydia, as well as acute acne, are treated with these analogues [43].
Treatment for Acne
The growth suppression of the anaerobic organism Cutibacterium acnes demonstrated by the tetracyclines makes this class of drugs important for the treatment of moderate and severe acne. Moreover, the anti-inflammatory effect of tetracyclines is an added advantage for acne lesions [43,45,46].
Veterinary Use
Several analogues of tetracyclines, including minocycline, methacycline, and doxycycline, were once considered unsuitable for veterinary use. It was later found that minocycline and doxycycline were in fact effective in the treatment of animal diseases. These tetracyclines have high lipid solubility, which explains their better pharmacokinetic profile, that is, improved absorption and distribution, which may result in more efficient antimicrobial activity. Doxycycline is excreted through the intestine, making it useful in renal impairment. Doxycycline is used in intestinal and respiratory tract infections in poultry. Minocycline is used in combination with streptomycin in the treatment of canine brucellosis [47].
Miscellaneous Uses
Tetracyclines are useful in the treatment of a number of diseases such as relapsing fever, syphilis, pneumonia, throat irritation, bacterial urinary tract infection, anthrax, Rocky Mountain spotted fever, sinus irritation and congestion, and chronic, slowly progressing ulcerative granulomatous disease [48]. Infections acquired through direct contact with infected animals or contaminated foods are also treated with these antibiotics. Tetracycline can serve as a substitute for penicillin or other antibiotics in cases of severe infections such as anthrax, Listeria, Clostridium, Actinomyces, and others [49]. Tetracyclines are also used in the treatment of bones and in the calcification of cartilage [12,50].
Precautions
The intake of milk and dairy products that contain calcium, as well as iron, antacids, or aluminum salts, should be avoided for at least 2 hours before or 6 hours after taking this therapy [51]. Doses of tetracyclines should be taken with water, one hour before or two hours after meals [52,53].
CONCLUSION
Tetracyclines belong to a class of broad-spectrum antibiotics. Worldwide, they are recommended to treat various gram-negative and gram-positive bacterial infections. They exert their action by reversibly binding to the 30S subunit of the bacterial ribosome. Tetracycline analogues are commonly used because of their availability and cost effectiveness, especially in developing countries. The structure-activity-relationship (SAR) studies of tetracyclines show selective bioactivity and affinity toward biological targets, which makes this class of medicinal compounds suitable for labeling with radioisotopes, providing outstanding results in the detection and treatment of localized tumors. Furthermore, advanced methods of therapy against infectious lesions have been introduced, including radiotherapy using radioisotopes of tetracyclines. In time to come, more radiolabeled tetracycline analogues can be derivatized for the diagnosis and treatment of infectious diseases. | 2021-04-28T15:36:03.772Z | 2021-01-29T00:00:00.000 | {
"year": 2021,
"sha1": "35eafd3d9f2b442d74e483abada2d5832dfb2087",
"oa_license": "CCBY",
"oa_url": "http://jpps.juw.edu.pk/index.php/jpps/article/download/426/259",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "4408124d73a6c0b1bb8a77430e41e8a561df5da8",
"s2fieldsofstudy": [
"Medicine",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
2508366 | pes2o/s2orc | v3-fos-license | AND/OR Multi-Valued Decision Diagrams (AOMDDs) for Weighted Graphical Models
Compiling graphical models has recently been under intense investigation, especially for probabilistic modeling and processing. We present here a novel data structure for compiling weighted graphical models (in particular, probabilistic models), called AND/OR Multi-Valued Decision Diagram (AOMDD). This is a generalization of our previous work on constraint networks, to weighted models. The AOMDD is based on the frameworks of AND/OR search spaces for graphical models, and Ordered Binary Decision Diagrams (OBDD). The AOMDD is a canonical representation of a graphical model, and its size and compilation time are bounded exponentially by the treewidth of the graph, rather than pathwidth as is known for OBDDs. We discuss a Variable Elimination schedule for compilation, present the general APPLY algorithm that combines two weighted AOMDDs, and also present a search-based compilation method. The preliminary experimental evaluation is quite encouraging, showing the potential of the AOMDD data structure.
Introduction
We present here an extension of AND/OR Multi-Valued Decision Diagrams (AOMDDs) [13] to general weighted graphical models, including Bayesian networks, influence diagrams and Markov random fields.
The work on AOMDDs is based on two existing frameworks: (1) AND/OR search spaces for graphical models and (2) decision diagrams (DD). AND/OR search spaces [9] have proven to be a unifying framework for various classes of search algorithms for graphical models. The main characteristic is the exploitation of independencies between variables during search, which can provide exponential speedups over traditional search methods that can be viewed as traversing an OR structure. The AND nodes capture problem decomposition into independent subproblems, and the OR nodes represent branching according to variable values.
Decision diagrams are widely used in many areas of research, especially in software and hardware verification [5]. A BDD represents a Boolean function by a directed acyclic graph with two sink nodes (labeled 0 and 1), and every internal node is labeled with a variable and has exactly two children: low for 0 and high for 1. A BDD is ordered if variables are encountered in the same order along every path. A BDD is reduced if all isomorphic nodes (i.e., with the same label and identical children) are merged, and all redundant nodes (i.e., whose low and high children are identical) are eliminated. The result is the celebrated reduced ordered binary decision diagram, or OBDD [3].
AOMDDs combine the two ideas, in order to create a decision diagram that has an AND/OR structure, thus exploiting problem decomposition. As a detail, the number of values is also increased from two to any constant, but this is less significant for the algorithms.
A decision diagram offers a compilation of a problem. It typically requires an extended offline effort in order to be able to support polynomial (in its size) or constant time online queries. The benefit of moving from an OR structure to AND/OR is a lower complexity of the algorithms and size of the compiled structure. It typically moves from being bounded exponentially in the pathwidth pw*, which is characteristic of chain decompositions or linear structures, to being bounded exponentially in the treewidth w*, which is characteristic of tree structures (it always holds that w* ≤ pw* and pw* ≤ w* · log n).
Our contributions in this paper are as follows. (1) We formally describe the extension of the AND/OR multi-valued decision diagram (AOMDD) to weighted graphical models. (2) We describe the extension to weighted models of the APPLY operator that combines two AOMDDs by an operation. The output of APPLY is still bounded by the product of the sizes of the inputs. (3) We present two compilation schemes: one is based on a Variable Elimination schedule, the other is based on search. Both schemes are exponential in the treewidth of the model. (4) We provide an encouraging preliminary experimental evaluation of the search based compilation method. (5) We discuss how AOMDDs relate to various earlier and recent works, providing a unifying perspective for all these methods.
The structure of the paper is as follows: Section 2 provides preliminaries. Section 3 gives an overview of AND/OR search space. Section 4 describes the AOMDD for constraint networks, the Variable Elimination schedule for compilation and the APPLY operator, and a search based compilation scheme. Section 5 contains the main contribution: the extension of AOMDDs to weighted models, and a discussion of their canonical form and the extensions of the compilation schedule and APPLY operator. Section 6 provides experimental evaluation and section 7 concludes.
Preliminaries
In this section we describe graphical models, Binary Decision Diagrams (OBDDs), and Variable Elimination.
DEFINITION 1 (graphical model) A graphical model is a triple M = ⟨X, D, F⟩, where X = {X_1, ..., X_n} is a set of variables, D = {D_1, ..., D_n} is the set of their finite domains, and F = {f_1, ..., f_r} is a set of discrete real-valued functions, each defined over a subset of variables S_i ⊆ X, called its scope. The graphical model represents the combination of all its functions: ⊗_{i=1}^{r} f_i (the combination operator ⊗ can be defined axiomatically [17]). A reasoning task is based on a projection (elimination) operator, ⇓, and is defined by ⇓_Y ⊗_{i=1}^{r} f_i, where Y ⊆ X. Examples of graphical models include Bayesian networks, constraint networks, influence diagrams, and Markov networks.
Two graphical models are equivalent if they represent the same set of solutions. Namely, if they have the same universal model.
DEFINITION 3 (primal graph)
The primal graph of a graphical model is an undirected graph that has variables as its vertices and an edge connects any two variables that appear in the scope of the same function.
A pseudo tree resembles the tree rearrangements [11]: DEFINITION 4 (pseudo tree) A pseudo tree of a graph G = (X, E) is a rooted tree T having the same set of nodes X, such that every arc in E is a back-arc in T (i.e., it connects nodes on the same path from root).
DEFINITION 5 (induced graph, induced width, treewidth, pathwidth) An ordered graph is a pair (G, d), where G is an undirected graph, and d = (X 1 , ..., X n ) is an ordering of the nodes. The width of a node in an ordered graph is the number of neighbors that precede it in the ordering. The width of an ordering d, denoted by w(d), is the maximum width over all nodes. The induced width of an ordered graph, w * (d), is the width of the induced ordered graph obtained as follows: for each node, from last to first in d, its preceding neighbors are connected in a clique. The induced width of a graph, w * , is the minimal induced width over all orderings. The induced width is also equal to the treewidth of a graph. The pathwidth pw * of a graph is the treewidth over the restricted class of orderings that correspond to chain decompositions.
Binary Decision Diagrams
Decision diagrams are widely used in many areas of research to represent decision processes. In particular, they can be used to represent functions. Due to the fundamental importance of Boolean functions, a lot of effort has been dedicated to the study of Binary Decision Diagrams (BDDs), which are extensively used in software and hardware verification [5,15,12,3].
A BDD is a representation of a Boolean function. Given B = {0, 1}, a Boolean function f : B^n → B has n arguments, X_1, · · · , X_n, which are Boolean variables, and takes Boolean values. A Boolean function can be represented by a table (see Figure 1(a)), but this is exponential in n, and so is the binary tree representation in Figure 1(b). The goal is to have a compact representation that also supports efficient operations between functions. OBDDs [3] provide such a framework by imposing the same order on the variables along each path in the binary tree, and then applying the following two reduction rules exhaustively: (1) isomorphism: merge nodes that have the same label and the same respective children (see Figure 1(c)); (2) redundancy: eliminate nodes whose low (zero) and high (one) edges point to the same node, and connect the parent of the removed node directly to the child of the removed node (see Figure 1(d)). The resulting OBDD is shown in Figure 1(e).
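A minimal sketch of this construction (illustrative only, not tied to any particular BDD library) enforces both rules at node-creation time with a unique table keyed by the variable and the two children; terminals can be any two distinct sentinel objects:

    def make_node(unique, var, low, high):
        # redundancy rule: skip a node whose low and high edges coincide
        if low is high:
            return low
        # isomorphism rule: hash-cons on (variable, low child, high child)
        key = (var, id(low), id(high))
        if key not in unique:
            unique[key] = (var, low, high)
        return unique[key]

Because children are themselves canonical (hash-consed), identity-based keys suffice to detect isomorphic nodes.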
Variable Elimination (VE)
Variable elimination (VE) [2,8] is a well known algorithm for inference in graphical models. Consider a graphical model R = ⟨X, D, F⟩ and an elimination ordering d = (X_1, X_2, . . . , X_n) (X_n is eliminated first, X_1 last). Each function is placed in the bucket of its latest variable in d. Buckets are processed from X_n to X_1 by eliminating the bucket variable (the functions residing in the bucket are combined together, and the bucket variable is projected out) and placing the resulting function (also called a message) in the bucket of its latest variable in d. Figure 2(a) shows a graphical model and 2(b) the execution of VE.
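Schematically, the bucket-processing loop can be rendered as below; the function representation and the combine/eliminate operators are deliberately left abstract, and all names are illustrative:

    def variable_elimination(buckets, order, combine, eliminate, latest_var):
        # buckets[X] holds the functions whose latest variable in `order` is X
        result = None
        for X in reversed(order):                    # X_n is processed first
            if not buckets[X]:
                continue
            msg = eliminate(combine(buckets[X]), X)  # join, then project out X
            if msg.scope:                            # route to the parent bucket
                buckets[latest_var(msg.scope)].append(msg)
            else:
                result = msg                         # constant: the final answer
        return result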
VE execution defines a bucket tree, by linking the bucket of each X i to the destination bucket of its message (called the parent bucket). A node in the bucket tree has a bucket variable, a collection of functions, and a scope (the union of the scopes of its functions). If the nodes of the bucket tree are replaced by their respective bucket variables, we obtain a pseudo tree (see Figure 2(c) and 3(b)).
AND/OR Search Space
The AND/OR search space [9] is a recently introduced unifying framework for advanced algorithmic schemes for graphical models. Its main virtue consists in exploiting independencies between variables during search, which can provide exponential speedups over traditional search methods oblivious to problem structure.
AND/OR Search Trees
Given a graphical model M = ⟨X, D, F⟩, its primal graph G, and a pseudo tree T of G, the associated AND/OR search tree, S_T(R), has alternating levels of OR and AND nodes. The OR nodes are labeled X_i and correspond to the variables. The AND nodes are labeled ⟨X_i, x_i⟩ and correspond to the value assignments in the domains of the variables. The structure of the AND/OR search tree is based on the underlying pseudo tree T. The root of the AND/OR search tree is an OR node labeled with the root of T. The children of an OR node X_i are AND nodes labeled with assignments ⟨X_i, x_i⟩ that are consistent with the assignments along the path from the root. The children of an AND node ⟨X_i, x_i⟩ are OR nodes labeled with the children of variable X_i in the pseudo tree T.
The AND/OR search tree can be traversed by a depth first search algorithm, thus using linear space. It was already shown [11,1,6,9] that: THEOREM 1 Given a graphical model M and a pseudo tree T of depth m, the size of the AND/OR search tree based on T is O(n k m ), where k bounds the domains of variables. A graphical model having treewidth w * has a pseudo tree of depth at most w * log n, therefore it has an AND/OR search tree of size O(n k w * log n ).
AND/OR Search Graphs
The AND/OR search tree may contain nodes that root identical conditioned subproblems. These nodes are said to be unifiable. When unifiable nodes are merged, the search space becomes a graph. Its size becomes smaller at the expense of additional memory used by the search algorithm. The depth first search algorithm can therefore be modified to cache previously computed results and retrieve them when the same nodes are encountered again.
Given a pseudo tree T of an AND/OR search space, the context of an OR node X, denoted by context(X) = [X 1 . . . X p ], is the set of ancestors of X in T ordered descendingly, that are connected in the primal graph to X or to descendants of X.
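The context can be computed directly from this definition; the sketch below (hypothetical dictionaries for the pseudo tree and the primal graph) collects the descendants of X in T and keeps the ancestors that touch that set in the primal graph:

    def or_context(X, parent, children, primal_adj):
        stack, below = [X], {X}
        while stack:                       # descendants of X in the pseudo tree
            for c in children.get(stack.pop(), ()):
                below.add(c)
                stack.append(c)
        ctx, A = [], parent.get(X)
        while A is not None:               # walk ancestors up to the root
            if primal_adj[A] & below:      # connected to X or a descendant of X
                ctx.append(A)
            A = parent.get(A)
        return ctx                         # nearest ancestor first; reverse for
                                           # the descending order of the definition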
It is easy to verify that the context of X separates the subproblem below X from the rest of the network. The context minimal AND/OR graph is obtained by merging all the context unifiable OR nodes. It was shown that [1,9]:
THEOREM 2 Given a graphical model M and a pseudo tree T, the size of the context minimal AND/OR search graph based on T is O(n k^{w*_T(G)}), where w*_T(G) is the induced width of the primal graph G over the depth-first traversal of T, and k bounds the domain size.
Example: Consider the graphical model in Figure 3(a), defined by functions f_1, . . . , f_4, which are assumed to be strictly positive (i.e., every assignment is valid). Figure 3(b) shows a pseudo tree for the graph. The dotted lines are edges in the primal graph, and back-arcs in the pseudo tree. The OR context of each node is shown in square brackets. Figure 3(c) shows the AND/OR search tree and 3(d) shows the context minimal AND/OR graph.
Weighted AND/OR Search Graphs
In some cases (e.g. constraint networks), the functions of the graphical model take binary values (0 and 1, or true and false). In this case, an AND/OR search graph expresses the consistency (valid or not) of each assignment, and can associate this value with its leaves.
In more general cases, which are the focus of this paper, the functions of the graphical model take (positive) real values, called weights. For example, in Bayesian networks the weights express the conditional probabilities. In the more general case of weighted models, it is useful to associate weights with the internal OR-AND arcs in the AND/OR graph, to maintain the global function decomposition and facilitate the merging of nodes. DEFINITION 6 (buckets relative to a backbone tree) Given a graphical model R = ⟨X, D, F, ⊗⟩ and a backbone tree T, the bucket of X_i relative to T, denoted by B_T(X_i), is the set of functions whose scopes contain X_i and are included in path_T(X_i), which is the set of variables from the root to X_i in T. Namely, B_T(X_i) = {f ∈ F | X_i ∈ scope(f), scope(f) ⊆ path_T(X_i)}. DEFINITION 7 (OR-to-AND weights) The weight of the arc from an OR node X_i to its AND child ⟨X_i, x_i⟩ along a path is the combination of all the functions in B_T(X_i), evaluated at the assignments along that path. Figure 4 shows a belief network, a DFS tree that drives its weighted AND/OR search tree, and a portion of the AND/OR search tree with the appropriate weights on the arcs expressed symbolically. In this case the bucket of E contains the function P(E|A, B), and the bucket of C contains two functions, P(C|A) and P(D|B, C). Note that P(D|B, C) belongs neither to the bucket of B nor to the bucket of D, but it is contained in the bucket of C, which is the last variable in its scope to be instantiated along a path from the root of the pseudo tree.
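Given these arc weights, the value of the model (e.g., the probability of evidence in a Bayesian network) is obtained by summing over the OR branches and multiplying over the independent AND children. A memoized sketch over a hypothetical node structure, where each OR node stores (arc weight, child OR nodes) pairs:

    def value(or_node, memo):
        # weighted sum-product over the AND/OR graph
        if id(or_node) in memo:
            return memo[id(or_node)]
        total = 0.0
        for weight, children in or_node.branches:   # one branch per value x_i
            prod = weight                           # OR-to-AND arc weight
            for child in children:                  # independent subproblems
                prod *= value(child, memo)
            total += prod
        memo[id(or_node)] = total
        return total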
AND/OR Multi-Valued Decision Diagram for Constraint Networks
Constraint networks have only binary valued functions. In [13] we presented a compilation scheme for AOMDDs for constraint networks based on the Variable Elimination schedule. For completeness, we only provide below the main ideas for constraint networks, and then present the current contribution extending the AOMDD for weighted graphical models.
The context minimal graph is a data structure that is equivalent to the given graphical model, in the sense that it represents the same set of solutions, and any query on the graphical model can be answered by inspecting the context minimal graph. Our goal is to shrink the context minimal graph even further, by identifying mergeable nodes beyond those based on context. Redundant nodes can also be identified and removed.
Suppose we are given an AND/OR search graph (it could also be a tree initially). The reduction rules of OBDDs are also applicable to it, if we maintain the semantics. In particular, we have to detail the treatment of AND nodes and OR nodes. If we consider only reduction by isomorphism, then the AND/OR graph can be processed by ignoring the AND or OR attributes of the nodes. If we consider reduction by redundancy, then it is useful to group each OR node together with its AND children into a meta-node.
DEFINITION 8 (meta-node)
A nonterminal meta-node v in an AND/OR search graph consists of an OR node labeled var(v) = X_i and its k_i AND children labeled ⟨X_i, x_i1⟩, . . . , ⟨X_i, x_ik_i⟩ that correspond to its value assignments. We will sometimes abbreviate ⟨X_i, x_ij⟩ by x_ij. Each AND node labeled x_ij points to a list of child meta-nodes, v.children_j.
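In code, a meta-node follows this definition directly; the dataclass below is an illustrative rendering (not the paper's implementation). Making it frozen (hence hashable) is convenient later, when isomorphic meta-nodes are detected by hashing:

    from dataclasses import dataclass
    from typing import Tuple

    @dataclass(frozen=True)
    class MetaNode:
        var: str                                      # the OR node label X_i
        children: Tuple[Tuple['MetaNode', ...], ...]  # child list per value x_ij

    TERMINAL_0 = MetaNode('0', ())
    TERMINAL_1 = MetaNode('1', ())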
Consider the pseudo tree in Figure 3(b). An example of a meta-node corresponding to variable A is given in Figure 5, assuming three values. That is just a portion of an AND/OR graph from which redundant meta-nodes were removed. For A = 0, the child meta-node has variable B. For A = 1, B is irrelevant, so the corresponding meta-node was removed, and there is an AND arc pointing to E and C. For A = 2, both B and C are irrelevant. This example did not take into account possible weights on the OR-AND arcs. Consider the ordering d = (A, B, C, D, E, F, G, H). The pseudo tree induced by d is given in Fig. 6(a). Figure 6(b) shows the execution of VE with AOMDDs along ordering d. Initially, the constraints C_1 through C_9 are represented as AOMDDs and placed in the bucket of their latest variable in d. Each original constraint is represented by an AOMDD based on a chain. For bi-valued variables they are OBDDs; for multi-valued variables they are MDDs (multi-valued decision diagrams). Note that we depict meta-nodes: one OR node and its two AND children, which appear inside each larger square node. The dotted edge corresponds to the 0 value (the low edge in OBDDs), the solid edge to the 1 value (the high edge). We have some redundancy in our notation, keeping both AND value nodes and arc types (dotted arcs from "0" and solid arcs from "1").
Compiling AOMDDs by Variable Elimination
The VE schedule is used to process the buckets in reverse order of d. A bucket is processed by joining all the AOMDDs inside it, using the APPLY operator (described below). However, the step of eliminating the bucket variable is omitted, because we want to generate the full AOMDD. In our example, the messages m_1 = C_1 ⋈ C_2 and m_2 = C_3 ⋈ C_4 are still based on chains, so they are still OBDDs. Note that they still contain the variables H and G, which have not been eliminated. However, the message m_3 = C_5 ⋈ m_1 ⋈ m_2 is not an OBDD anymore. We can see that it follows the structure of the pseudo tree, where F has two children, G and H. Some of the nodes corresponding to F have two outgoing edges for value 1.
The processing continues in the same manner. The final output of the algorithm, which coincides with m_7, is shown in Figure 6(c). The OBDD based on the same ordering d is shown in Fig. 6(d). Notice that the AOMDD has 18 nonterminal nodes and 47 edges, while the OBDD has 27 nonterminal nodes and 54 edges.
We present the APPLY algorithm for combining AOMDDs for constraints. It was shown in [13] that the complexity of the APPLY is at most quadratic in the input.
In [13] it was shown that the time and space complexity of the VE based compilation scheme is exponential in the treewidth of the model.
Compiling AOMDDs by AND/OR Search
We describe here a search based approach for compiling an AOMDD. Theorem 2 ensures that the context minimal (CM) graph can be traversed by AND/OR search in time and space O(n k^{w*_T(G)}). When full caching is used, the trace of AND/OR search (i.e., the AND/OR graph traversed by the algorithm) is a subset of the CM graph (if pruning techniques are used, some portions of the CM graph may not be traversed). When the AND/OR search algorithm terminates, its trace is an AND/OR graph that expresses the original graphical model. We can therefore apply the reduction rules (isomorphism and redundancy) to the trace of the AND/OR search in a single bottom-up pass, which has complexity linear in the size of the trace. In fact, the reduction rules can be included in the depth-first AND/OR search algorithm itself: whenever the entire subgraph of a meta-node has been visited, the algorithm can check for isomorphism between the current node and meta-nodes of the same variable, and also check redundancy, before the search retracts to the parent meta-node. The end result will be the AOMDD of the original graphical model. The time and space complexity of this scheme is bounded in the worst case by that of exploring the CM graph, which is given in Theorem 2 (i.e., exponential in the treewidth of the model). In Section 6 we provide a preliminary evaluation of the search based compilation.
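The bottom-up reduction pass can reuse the same unique-table idea as for OBDDs, now over meta-nodes; a sketch (names illustrative, building on the MetaNode structure above):

    def reduce_meta_node(unique, var, children_by_value):
        # redundancy: if every value leads to the same child list, skip the
        # node and let the parent adopt that child list directly
        if children_by_value and all(
                c == children_by_value[0] for c in children_by_value[1:]):
            return children_by_value[0]
        # isomorphism: reuse an existing meta-node with the same label and
        # identical children lists
        key = (var, children_by_value)
        if key not in unique:
            unique[key] = MetaNode(var, children_by_value)
        return unique[key]

The caller splices the returned child list into the parent when the redundancy branch fires, mirroring how the parent of a removed redundant node points directly to its children.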
AND/OR Multi-Valued Decision Diagram for Weighted Graphs
We will now describe an extension of AOMDDs to weighted graphical models, which include probabilistic graphical models. The functions defining the model can in this case take arbitrary positive real values. The AND/OR search space is well defined for such graphical models, and in particular the context minimal graph is a decision diagram that represents the same function as the model. The reduction rules (merge isomorphic nodes and reduce redundant nodes) are also well defined for weighted models (if we operate with meta-nodes), and guaranteed to produce equivalent decision diagrams. For example, isomorphic nodes should have the same variable, the same sets of children, and the same weights on their respective OR-AND arcs. If we start with the AND/OR tree and apply the isomorphism rule exhaustively, we are guaranteed to obtain a graph at least as compact as the context minimal graph. This is because OR nodes that have the same context also represent isomorphic meta-nodes when the isomorphic rule was applied exhaustively to all the levels below.
However, the property of being a canonical representation of a function is lost in the case of weighted graphs, if we only use the usual reduction rules. Figure 7(c) shows the context minimal graph, which has a compact representation of each subtree, but does not share any of their parts. In these figures we do not show the contours of meta-nodes, to reduce clutter.
What we would like in this case is to have a method of recognizing that the left and right subtrees corresponding to M = 0 and M = 1 represent the same function. We do this by normalizing the weights in each level, and processing bottom up by promoting the normalization constant.
In Figure 8(a) the weights on the OR-AND arcs of level C have been normalized, and the normalization constant was promoted up to the OR node value. In Figure 8(b) the normalization constants are promoted upwards again by multiplication into the OR-AND weights. This process does not change the value of each full assignment, and therefore produces equivalent graphs. We can see now that some of the C level (meta) nodes are mergeable. Continuing this process gives the final AOMDD for the weighted model, in Figure 8(c).
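A sketch of this bottom-up pass over a hypothetical mutable node structure (each OR node holds branches with a weight and a list of child nodes; terminals contribute the constant 1 in this simplified version):

    def normalize(node, memo):
        # returns the promotion constant of `node`; afterwards its outgoing
        # OR-AND weights sum to 1
        if id(node) in memo:
            return memo[id(node)]
        if not node.branches:                    # terminal meta-node
            return 1.0
        for branch in node.branches:
            w = branch.weight
            for child in branch.children:
                w *= normalize(child, memo)      # pull child constants upward
            branch.weight = w
        total = sum(b.weight for b in node.branches)
        if total > 0:
            for b in node.branches:
                b.weight /= total
        memo[id(node)] = total
        return total

Calling normalize on the root returns the root constant of Definition 9 below; a child shared by several parents is normalized once, and its constant is promoted into every incoming arc.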
DEFINITION 9 (weighted AOMDD) A weighted AOMDD is an AND/OR graph (with meta-nodes) where, for each OR node, the emanating OR-AND arcs have an associated weight such that their sum is 1, and the root meta-node has a weight (the resulting normalization constant). The terminal nodes are just 0 and 1.
The following theorem ensures that the weighted AOMDD is a canonical representation. The proof is omitted here for space reasons. We only mention that the proof is by structural induction, bottom up over the layers of the AND/OR graph.
The APPLY algorithm needs minimal modifications now to operate on weighted AOMDDs. The hash function H 2 , which hashes meta-nodes, has to take as extra arguments the weights of the meta-node. Similarly, when checking redundancy in line 21, the weights should also be equal for the node to be redundant, and their common value has to be promoted by multiplication. When checking isomorphism in line 23, the corresponding weights are checked via the hash function H 2 . The same VE schedule can now be used to compile an AOMDD for a weighted graphical model.
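Illustratively, the only change to the isomorphism hash is that the (normalized) arc weights join the signature; a hypothetical H2 analogue:

    def weighted_key(var, children_by_value, weights):
        # two weighted meta-nodes unify only if labels, children lists, and
        # normalized arc weights all coincide (rounding as a float tolerance)
        return (var, children_by_value, tuple(round(w, 12) for w in weights))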
Experimental Evaluation
Our experimental evaluation is in preliminary stages, but the results we have are already encouraging. We ran the search based compile algorithm, by recording the trace of the AND/OR search, and then reducing the resulting AND/OR graph bottom up. In these results we only applied the reduction by isomorphism and still kept the redundant meta-nodes. Table 1 shows the results for 20 belief networks from 5 problem classes: medical diagnosis (CPCS), digital circuits (ISCAS), deterministic grid networks (GRID), genetic linkage analysis (LINKAGE) as well as relational belief networks (PRIMULA). For each network we chose randomly e variables and set their values as evidence. For each query we recorded the compilation time in seconds, the number of OR nodes in the context minimal graph explored (#cm) and the size of the resulting AOMDD (#aomdd).
In addition, we also computed the compression ratio of the AOMDD structure as ratio = #cm/#aomdd. We also report the number of variables (n), domain size (d), induced width (w * ), pseudo tree depth (h), as well as the percentage of zero probability tuples (zeros (%)) for each test instance.
We see that in a few cases the compression ratio is significant (e.g., cpcs422b 16.67%, s386 7.64%). Our future work will include the reduction rule by redundancy, as well as the compilation algorithm by Variable Elimination schedule.
Conclusion and Discussion
We presented the new data structure of the weighted AOMDD, as a target for compilation of weighted graphical models. It is based on AND/OR search spaces and Binary Decision Diagrams. We argue that the AOMDD has an intuitive structure and can easily be incorporated into other already existing algorithms (e.g., join tree clustering). We provide two compilation methods, one based on Variable Elimination and the other based on search, both being time and space exponential in the treewidth of the graphical model. The preliminary experimental evaluation is quite encouraging and shows the potential of the new AOMDD data structure.
Compiling graphical models into weighted AOMDDs also extends decision diagrams for the computation of semiring valuations [18] from linear variable orderings to tree-based partial orderings. This improves the complexity guarantees from exponential in pathwidth to exponential in treewidth.
There are various lines of related research. We only mention here: deterministic decomposable negation normal form (d-DNNF) [7]; case factor diagrams [14]; compilation of CSPs into tree-driven automata [10]; and the recent work on compilation [16,4]. We think that our framework using AND/OR search graphs has a unifying quality that helps make connections among seemingly different compilation techniques.
The approach of compiling graphical models into AOMDDs may seem to go against the current trend in model checking, which is moving away from BDD-based algorithms toward CSP/SAT-based approaches. However, search-based algorithms and compiled data structures such as BDDs differ primarily in their trade-off between time and memory. When we move from a regular OR search space to an AND/OR search space, the spectrum of available algorithms improves across all time-memory trade-offs. We believe that the AND/OR search space clarifies the available choices and helps the user make an informed selection of the algorithm best suited to the particular query asked, the specific input function and the computational resources. | 2012-06-20T08:02:53.000Z | 2007-07-19T00:00:00.000 | {
"year": 2012,
"sha1": "b9b64f0959ec51b11cc2c2a559be8e8e534f9832",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "b35f801300ad04da402386d2a2631080f356bbe5",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
8431424 | pes2o/s2orc | v3-fos-license | Gene Ontology annotation of the rice blast fungus, Magnaporthe oryzae
Background Magnaporthe oryzae is the causal agent of blast disease, the most destructive disease of rice worldwide. The genome of this fungal pathogen has been sequenced and an automated annotation has recently been updated to Version 6. However, a comprehensive manual curation remains to be performed. Gene Ontology (GO) annotation is a valuable means of assigning functional information using a standardized vocabulary. We report an overview of the GO annotation for Version 5 of the M. oryzae genome assembly. Methods A similarity-based (i.e., computational) GO annotation with manual review was conducted, which was then integrated with a literature-based GO annotation with computational assistance. For the similarity-based GO annotation, a stringent reciprocal best hits method was used to identify similarity between predicted proteins of M. oryzae and GO proteins from multiple organisms with published associations to GO terms. Significant alignment pairs were manually reviewed. Functional assignments were further cross-validated with manually reviewed data, conserved domains, or data determined by wet lab experiments. Additionally, the biological appropriateness of the functional assignments was manually checked. Results In total, 6,286 proteins received GO term assignments via the similarity-based annotation, including 2,870 hypothetical proteins. Literature-based experimental evidence, such as microarray, MPSS, T-DNA insertion mutation, or gene knockout mutation, resulted in 2,810 proteins being annotated with GO terms. Of these, 1,673 proteins were annotated with new terms developed for the Plant-Associated Microbe Gene Ontology (PAMGO). In addition, 67 experimentally determined secreted proteins were annotated with PAMGO terms. Integration of the two data sets resulted in 7,412 proteins (57%) being annotated with 1,957 distinct and specific GO terms. Unannotated proteins were assigned to the 3 root terms. The Version 5 GO annotation is publicly queryable via the GO site. Additionally, the genome of M. oryzae is constantly being refined and updated as new information is incorporated. For the latest GO annotation of the Version 6 genome, please visit our website. The preliminary GO annotation of the Version 6 genome is held in a local MySQL database that is publicly queryable via a user-friendly interface, the Adhoc Query System. Conclusion Our analysis provides comprehensive and robust GO annotations of the M. oryzae genome assemblies that will be a solid foundation for further functional interrogation of M. oryzae.
Introduction
Magnaporthe oryzae, the rice blast fungus, infects rice and other agriculturally important cereals, such as wheat, rye and barley. The pathogen is found throughout the world and each year is estimated to destroy enough rice to feed more than 60 million people [1]. A comprehensive understanding of the genetic makeup of the rice blast fungus is crucial in efforts to understand the biology of, and develop effective disease management strategies for, this destructive pathogen.
The rice blast fungus has been the focus of intense investigation and a number of genomic resources have been generated. These include a genome sequence [2], genome-wide expression data from microarray [3] and massively parallel signature sequencing (MPSS) [4], as well as a large bank of T-DNA insertion mutants [5,6]. In addition, numerous genes have been functionally characterized by targeted knockout [7][8][9][10][11][12][13][14][15][16][17][18]. While these resources are of tremendous utility, much of the genome remains unexplored. Until now, only automated annotations of the predicted genes have been available. In order to maximize the utility of the genome sequence, we have developed manual curations of the predicted genes.
Providing functionality through careful and comprehensive application of a standardized vocabulary, such as the Gene Ontology (GO), requires manual curation. The GO has evolved into a reliable and rapid means of assigning functional information [19][20][21][22]. There are two types of GO annotation: one is referred to as similarity-based GO annotation, and the other as literature-based GO annotation. Similarity-based GO annotation applies computational approaches to match characterized gene products and their associated GO terms to gene products under study. Orthology-based GO annotation is a more stringent application of similarity-based GO annotation. Literature-based GO annotation involves reviewing published work and then manually assigning GO terms to characterized gene products. Currently, similarity-based GO annotation predominates since it is rapid and relatively inexpensive [21,23]. On the other hand, although literature-based annotation is time consuming, it is considered more reliable and provides a mechanism to assign previously unassigned GO terms or newly developed GO terms to proteins. Here, we present an overview of the strategy and results obtained from the integration of both approaches to assign GO terms to Version 5 of the M. oryzae genome.
M. oryzae genome sequence
This paper summarizes manual annotation of Version 5 of the M. oryzae genome sequence. At the time of submission of this manuscript, Version 6 of the genome assembly of M. oryzae was released by the Broad Institute. Version 6 will be annotated using the same methodology described here. A preliminary GO annotation of the Version 6 genome sequence, based on the Version 5 annotation, has been placed in our local MySQL database at http://scotland.fgl.ncsu.edu/smeng/GoAnnotationMagnaporthegria.html.
Sequence similarity-based GO annotation
Step 1 Predicted proteins of Version 5 of the M. oryzae genome sequence were downloaded from the Broad Institute at http://www.broad.mit.edu/annotation/genome/magnaporthe_grisea/MultiDownloads.html. GO-annotated proteins were downloaded from the Gene Ontology (GO) database at http://www.geneontology.org/GO.downloads.database.shtml. These GO-annotated proteins were from about 50 organisms with published associations with GO terms. Only three of the 50 organisms are fungi: Candida albicans, Saccharomyces cerevisiae, and Schizosaccharomyces pombe. The other organisms are bacteria, plants, animals, etc. Proteins of these non-fungal organisms were retained to increase the number of proteins with validated functions available for matching to M. oryzae.
Step 2 Possible ortholog pairs between GO proteins and predicted proteins from M. oryzae genome sequence Version 5 were estimated by searching for reciprocal best hits using BLASTP (E-value < 10^-3) [24].
Step 3 Significant alignment pairs with 80% or better coverage of both query and subject sequences, a BLASTP E-value of 10^-20 or less, and 40% or higher amino acid identity (pid) were manually reviewed.
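A hedged sketch of Steps 2-3 is given below; the authors' actual scripts are not reproduced here. It assumes BLASTP was run in both directions with tabular output (columns as in -outfmt 6), and the file names are hypothetical. The 80% mutual-coverage test would additionally require sequence lengths and is omitted.

```python
import csv

def best_hits(path):
    """Keep the best (lowest E-value) subject per query."""
    best = {}
    with open(path) as fh:
        for q, s, pid, *_rest, evalue, _bits in csv.reader(fh, delimiter="\t"):
            if q not in best or float(evalue) < best[q][1]:
                best[q] = (s, float(evalue), float(pid))
    return best

mo_vs_go = best_hits("moryzae_vs_go.tsv")   # hypothetical file names
go_vs_mo = best_hits("go_vs_moryzae.tsv")

# Reciprocal best hits: each protein is the other's best match.
rbh = [(q, s) for q, (s, _e, _p) in mo_vs_go.items()
       if go_vs_mo.get(s, (None,))[0] == q]

# Step-3 style filter for manual review: E-value <= 1e-20 and pid >= 40%.
review = [(q, s) for q, s in rbh
          if mo_vs_go[q][1] <= 1e-20 and mo_vs_go[q][2] >= 40.0]
```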
Step 4 The functions of significantly matched GO proteins were manually cross-validated using data from wet lab experiments, and the NCBI Conserved Domain Database (CDD) [25].
Step 5 If the functions suggested from different sources were consistent with each other, and with available M. oryzae data, the functions of the experimentally characterized, significantly matched GO proteins were transferred to the M. oryzae proteins in our study and given the evidence code ISS (Inferred from Sequence Similarity) [26,27].
Step 6 The information was recorded into a gene association file following the format standard at http://www.geneontology.org/GO.format.annotation.shtml.
Literature-based GO annotation
Step 1 Literature in public databases, such as PubMed [a database of biomedical literature citations and abstracts at the National Center for Biotechnology Information (NCBI)], was searched using key words, including alternative species names for the organism such as Magnaporthe grisea and Pyricularia oryzae.
Step 2 Relevant published papers were read and genes or gene products and their functions were identified.
Step 3 Where necessary, gene IDs and sequences at public databases, such as the NCBI protein database, were identified.
Step 4 Based on the functions identified in the paper(s), appropriate GO terms were found using AmiGO, the GO-supported tool for searching and browsing the Gene Ontology database.
Step 5 Evidence codes were assigned following the guide at http://www.geneontology.org/GO.evidence.shtml.
Step 6 Data were recorded into the gene association file manually or using custom PERL scripts for large gene sets with the same biological process.
Integration of the results from the two types of GO annotations
Step 1 Similarity-based annotations were replaced with literature-based annotations, where redundant, using custom PERL scripts.
Step 2 Custom PERL scripts were used to annotate each protein with GO terms from the three ontologies using the following protocol. Any protein not annotated with a GO term following similarity-based and literature-based GO annotations was annotated with the three root GO terms, GO:0005575 (Cellular Component), GO:0003674 (Molecular Function), and GO:0008150 (Biological Process). Additionally, if any protein was lacking annotation from any of the three GO categories, Cellular Component, Molecular Function, or Biological Process, the protein was annotated with the root GO terms of the missing GO categories.
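The rule in Step 2 can be illustrated with a short Python sketch (the authors used custom PERL scripts; the function and variable names here are hypothetical):

```python
# Minimal sketch of the Step-2 logic: any GO aspect a protein lacks
# receives that aspect's root term.
ROOTS = {"C": "GO:0005575",   # Cellular Component
         "F": "GO:0003674",   # Molecular Function
         "P": "GO:0008150"}   # Biological Process

def complete_annotation(terms_by_aspect):
    """terms_by_aspect maps 'C'/'F'/'P' to the GO terms a protein already
    has; any aspect left empty receives that aspect's root term."""
    return {aspect: terms or {ROOTS[aspect]}
            for aspect, terms in ((a, set(terms_by_aspect.get(a, ())))
                                  for a in "CFP")}

# A protein with only a molecular-function term gets the C and P roots:
print(complete_annotation({"F": {"GO:0016491"}}))
```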
Step 3 Errors in the gene association file were checked using the script filter-gene-association.pl, which was downloaded from the GO database at ftp://ftp.geneontology.org/pub/go/software/utilities/filter-gene-association.pl.
The gene association file for Version 5 of the M. oryzae genome sequence was uploaded to the GO database at http://www.geneontology.org/GO.current.annotations.shtml. Many protocols and scripts were created for generating and parsing the data. For example, a protocol and five scripts were developed to replace redundant similarity-based annotations with literature-based annotations. Furthermore, a protocol and eight scripts were developed to provide each gene with a GO term from the three ontologies. In addition, a PERL script to record many genes into the gene association file was developed. This script, with slight modification, easily recorded different types of data, such as microarray expression, MPSS, or T-DNA insertion mutation, into the gene association file. These protocols and scripts are available upon request from the corresponding or the first author.
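As an illustration of such a recording script, the following hedged Python sketch writes one association per gene in a GAF-style tab-separated layout (the column set follows the public GAF specification; the gene list, reference, and taxon identifier below are placeholders, not data from this study):

```python
import csv
import datetime

# Hypothetical input: (gene ID, GO term, evidence code, reference).
genes = [("MGG_01234", "GO:0043581", "IEP", "PMID:0000000")]

with open("gene_association.mgg", "w", newline="") as fh:
    w = csv.writer(fh, delimiter="\t")
    for gene_id, go_id, evidence, ref in genes:
        # 15 GAF columns: DB, object ID, symbol, qualifier, GO ID,
        # reference, evidence, with/from, aspect, name, synonym,
        # object type, taxon, date, assigned-by.
        w.writerow(["MGG", gene_id, gene_id, "", go_id, ref, evidence,
                    "", "P", "", "", "gene_product",
                    "taxon:318829",  # NCBI taxid for M. oryzae (assumption)
                    datetime.date.today().strftime("%Y%m%d"), "PAMGO"])
```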
Computational GO annotation
From the initial BLASTP analysis for reciprocal best hits, 6,286 (49% of the 12,832) predicted proteins were annotated with 1,911 distinct and specific GO terms out of a total of 29,126 assigned terms. In total, 4,881 (78%) of the 6,286 proteins were considered to be significant matches to characterized GO proteins, with an E-value < 10^-20 and percentage of identity (pid) ≥ 40%. Furthermore, 4,535 (93%) of the 4,881 proteins were annotated based on highly significant similarities with E-values = 0 and pid ≥ 40% (see Figure 1 for details). The pairwise alignments of these significant matches were manually reviewed. Additionally, these high-quality matches were cross-validated as follows: a total of 67 secreted proteins of M. oryzae were experimentally demonstrated to be secreted through cloning into an overexpression vector and expression in M. oryzae transformants (Ebbole and Dean, unpublished data). These 67 secreted proteins were annotated with the biological process term GO:0009306 ("protein secretion") and the cellular component term GO:0005576 ("extracellular region"). The evidence code IDA was assigned to annotations of these 67 proteins since function was determined through direct assay.
A total of 128 curated cytochrome P450s of M. oryzae were validated by comparison and analysis of gene location and structure, clustering of genes, and phylogenetic reconstruction [28]. Different subsets of these proteins were annotated with different GO terms. For example, 75 of the 128 P450 proteins were annotated with the molecular function term GO:0005506 ("iron ion binding"), and 40 P450 proteins with the molecular function term GO:0016491 ("oxidoreductase activity"). The evidence code IGC was assigned to annotations of these P450 proteins since the annotations were based on genomic context. A total of 428 putative transcription factors of M. oryzae were validated by integrated computational analysis of whole-genome microarray expression data and matches to InterPro, Pfam, and COG [3]. Again, different subsets of the 428 proteins were annotated with different GO terms. For example, 36 proteins were annotated with GO:0005975 ("carbohydrate metabolic process"), and 12 proteins were annotated with GO:0006520 ("amino acid metabolic process"). The evidence code RCA was assigned to annotations of the 428 transcription factors since the annotations were based on reviewed computational analysis.
A total of 2,548 conserved domains from NCBI CDD were used as evidence for cross-checking putative functions, but no GO annotation was made based solely on identification of these domains.
In addition, the evidence code ISS was assigned to annotations of 216 M. oryzae proteins for the following reasons: 1) These proteins have significant similarity to experimentally-characterized homologs over the majority (at least 80%) of the full length sequences. 2) The pairwise alignments of good matches between the characterized proteins and the proteins of M. oryzae were manually reviewed. 3) Functional domains were conserved between the M. oryzae proteins and their homologs. 4) The GO assignments from the characterized match proteins to the M. oryzae proteins were manually determined to be biologically relevant.
The remaining 1,343 proteins, with a reciprocal BLASTP best match of E-value > 10^-20 and pid < 40%, were assigned GO terms from their characterized matches, but the evidence codes were identified as IEA (Inferred from Electronic Annotation).
In sum, GO terms were assigned to 6,286 proteins of M. oryzae. Among the 6,286 proteins, 2,732 hypothetical proteins, 125 predicted proteins, and 14 unknown proteins were assigned functions.
Literature-based GO annotation
More than 400 research articles were read, and 71 genes with gene knockout mutations and with accession numbers and sequences deposited in public databases such as NCBI were manually annotated using GO terms, including newly developed Plant-Associated Microbe Gene Ontology (PAMGO) terms. Gene products were annotated with GO terms relevant to their biological functions. For example, 6 genes were annotated with GO:0000187 ("activation of MAPK activity"), 5 genes with GO:0075053 ("formation of symbiont penetration peg for entry into host"), 14 genes with GO:0044409 ("entry into host"), 8 genes with GO:0044412 ("growth or development of symbiont within host"), and 43 genes with GO:0009405 ("pathogenesis"). The evidence code IMP (inferred from Mutant Phenotype) was assigned to these annotations since gene-knockout mutants were generated in order to determine functions of these genes.
Figure 1. Features of reciprocal best BLASTP matches between GO-annotated proteins and predicted proteins of Magnaporthe oryzae. The vast majority of the matches to characterized proteins have high sequence identity over much of their length. Shaded grey bars indicate matches with a percentage of identity (pid) ≥ 40%; shaded black bars indicate pid < 40%.
A total of 210 genes were annotated on the basis of published microarray studies [3]. Again, gene products were annotated with GO terms, including PAMGO terms, relevant to their biological functions. For example, 67 genes were annotated with GO:0044271 ("nitrogen compound biosynthetic process"), 27 genes with GO:0075005 ("spore germination on or near host"), 26 genes with GO:0075035 ("maturation of appressorium on or near host"), and 114 genes with GO:0075016 ("appressorium formation on or near host"). The evidence code IEP (Inferred from Expression Pattern) was assigned to these annotations on the basis that the genes were up-regulated by at least 10-fold in association with the particular biological process. A further 2,433 genes were annotated on the basis of published Massively Parallel Signature Sequencing (MPSS) studies [4], including 1,041 genes annotated with GO:0043581 ("mycelium development") and 1,392 genes annotated with GO:0075016 ("appressorium formation on or near host"). The evidence code IEP was also assigned to these annotations since the genes were up-regulated only during a certain biological process, such as mycelium formation, and the fold change was equal to or greater than 10.
On the basis of whole genome T-DNA insertion mutation data [5], 120 genes were annotated with relevant GO terms and PAMGO terms. For instance, 43 genes were annotated with GO:0030437 ("ascospore formation"), 14 genes with GO:0009847 ("spore germination"), 64 genes with GO:0075016 ("appressorium formation on or near host"), and 106 genes with GO:0009405 ("pathogenesis"). An evidence code IMP (inferred from mutant phenotype) was assigned to these annotations.
In total, 2,810 proteins were annotated based on experimental data from published peer-reviewed literature. Of these, 1,673 proteins were annotated with terms created by the PAMGO consortium to describe interactions between symbionts and their hosts.
Integration of results from the two types of GO annotations
Integration of the similarity-based and literature-based annotations resulted in 7,412 proteins being annotated with specific GO terms, covering more than 57% of the inferred proteome. The remaining 5,464 predicted proteins, not having high similarity to GO-annotated proteins, were annotated with three general GO terms: GO:0005575 (Cellular Component), GO:0003674 (Molecular Function), and GO:0008150 (Biological Process). Therefore, our GO annotation covers all 12,832 proteins predicted in M. oryzae, with each protein annotated with GO terms from the three GO categories.
Data availability
The GO annotation of Version 5 of the genome sequence of Magnaporthe oryzae is available at the GO Consortium database at http://www.geneontology.org/GO.current.annotations.shtml.
Discussion
Here, we present a detailed protocol for integrating the results of similarity-based annotation with a literature-based annotation of the predicted proteome of Version 5 of the genome sequence of the rice blast fungus M. oryzae. Through careful manual inspection of these annotations, we are able to provide a reliable and robust GO annotation for more than half of the predicted gene products. Of 6,286 proteins receiving computational annotations, only 1,343 did not exceed our stringent match criteria upon manual review and so were assigned the evidence code IEA. It should be noted that annotations with the IEA evidence code are retained in the GO database for only one year, after which the GO Consortium removes them from the gene association file. To be retained, IEA annotations must be manually reviewed in order to be assigned an upgraded evidence code such as ISS (Inferred from Sequence or Structural Similarity). Currently, there is no recognized standard for assigning the ISS code. We recommend the following criteria for assigning the ISS code:
• The functions of the proteins from which the annotation will be transferred must be experimentally characterized.
• The similarity between the characterized proteins and the proteins under study must be significant. For example, we used ≥ 80% coverage of both query and subject sequences, E-value ≤ 10^-20, and percentage of identity (pid) ≥ 40% as cutoff criteria in our similarity-based GO annotation. Ideally, orthology should be established by phylogenetic analysis.
• The pairwise alignment between the characterized proteins and the proteins under study should be manually reviewed and cross-validated with characterized or reviewed data from other resources, such as functional domains, active sites, and sequence patterns.
• Biological appropriateness of all assigned GO terms should be manually reviewed. | 2016-05-12T22:15:10.714Z | 2009-02-19T00:00:00.000 | {
"year": 2009,
"sha1": "4c6c99156c43994dc2d83357eb7caee8fca64b3a",
"oa_license": "CCBY",
"oa_url": "https://bmcmicrobiol.biomedcentral.com/track/pdf/10.1186/1471-2180-9-S1-S8",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d12b0c97d7d23fb941259fe77d7712012dbfd47e",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
55264828 | pes2o/s2orc | v3-fos-license | How children living in poor areas of Dar Es Salaam, Tanzania perceive their own multiple intelligences
Abstract This study was carried out with 1,857 poor children from 17 schools, living in low-income areas of Dar Es Salaam, Tanzania. All children took the 'Student Multiple Intelligences Profile' (SMIP) questionnaire as part of a bigger project that gathered data around concepts and beliefs of talent. This paper sets out two aims: first, to investigate the structural representation of the self-perceived multiple intelligences for this set of children, and second, to discuss how the best fit model might reflect children's culture and their school experiences. After carrying out exploratory factor analysis, a four-factor first-order model was shown to have a good fit. A higher-order factor solution was investigated owing to the correlation of two latent constructs. In order to provide some insight into the multiple intelligences construct, the relationship between the SMIP items, student test outcomes and attitudes to learning was examined. The item groupings were explored through African cultural beliefs around intelligences indigenous to African communities.
Introduction
The idea of a unitary intelligence is now generally viewed as inadequate, with many assuming a broader perspective of intelligence (Gardner, 1983, 1999; Karolyi, Ramos-Ford & Gardner, 2003; Renzulli, 1986, 1998; Sternberg, 1985, 1986, 1997, 2000). Gardner (1983) initially identified seven intelligences: verbal-linguistic, music, logical mathematical, visual spatial, bodily kinaesthetic, intrapersonal and interpersonal; he added an eighth, naturalist, a few years later (Gardner, 1999). Gardner (2006a) states that the theory of multiple intelligences has attracted polarised views from 'extravagant praise' to 'arbitrary dismissals' (p. 503). Indeed, books and articles have been written critiquing the theory of multiple intelligences (MI), initiating in some respects responses from Gardner himself (Gardner, 1995, 1999, 2006b; Gardner & Moran, 2006; Schaler, 2006; Visser, Ashton & Vernon, 2006; Waterhouse, 2006). Those questioning MI suggest inadequate evidence and empirical support for the theory, along with an inconsistency with cognitive neuroscience findings (Waterhouse, 2006). Others look at problems regarding measurability (Visser et al., 2006). Gardner (2006a) himself concedes that tests need to be 'intelligence-fair', focusing on the intelligence that is to be measured (p. 504).
Regarding the assessment of MI, Gardner, Feldman & Krechevsky (1998) created 'Project Spectrum' in order to ascertain the intellectual profile of children in a manner that was as natural as possible. Alternative assessment was utilised to identify and evaluate student abilities using performance-based assessment with respect to MI theory. Similarly, the 'Discovering Intellectual Strengths and Capabilities (DISCOVER) Projects' set out to measure three of the MI intelligences, spatial, logical mathematics and linguistic, through problem-solving performance-based assessment. Two of the other intelligences, inter- and intrapersonal, were also assessed through observation (Maker, Nielson & Rogers, 1994). Udall and Passe (1993) developed the 'Multiple Intelligences Assessment Technique' (MIAT) to assess four of the multiple intelligences through performance-based activities, teacher ratings and observations. This technique was used in the project 'Support to Affirm Rising Talent' (START) (Plucker, Callahan & Tomchin, 1996). There are various self-reporting MI assessments, including the 'Student Multiple Intelligences Profile' (SMIP) questionnaire and the 'Multiple Intelligences Developmental Assessment Scales' (MIDAS), which is used in career counselling (Chan, 2001; Shearer & Luzzo, 2009). Although Gardner believes that self-reporting may have problems concerning reliability, he does not dismiss such assessments of MI, stating that 'much can be learned about how people conceive of themselves, and through comparisons of response patterns found among and across different groups of subjects' (Gardner, 2011, p. xiv). Some research has grouped these multiple intelligences (MI) into conceptual clusters (Bennett, 1996, 1997; Chan, 2006; Furnham, 2001). Indeed, Campbell, Campbell & Dickinson (2004) classified Gardner's eight intelligences into three clusters: personal related, object related, and object free intelligences. Nevertheless, however viewed or grouped, the theory of multiple intelligences (MI) 'provides one useful framework for understanding individuals' basic competencies, as well as their unique strengths' (Chan, 2006, p. 326). It also provides one possible strand of the identification process through the self-perception or self-estimation of one's intelligence (Bennett, 1996, 1997, 2000; Furnham, 1999, 2000; Petrides & Furnham, 2000).
Although popular in other country settings, no sub-Saharan African study has been found that uses a performance-based assessment. A couple of studies have been carried out in Africa around multiple intelligences that use the self-estimate approach. One study was undertaken in Namibia, South Africa, Zambia and Zimbabwe regarding parental estimates of their own and their children's multiple intelligences. A total of 421 parents were asked to rate where they believed their own and their children's scores for seven intelligences lay along a normal distribution curve. The data show that Namibians were more likely to give the lowest self-estimates and Zambians the highest self-estimates. Females gave higher self-estimates than males on all seven multiple intelligences. A similar study in Nigeria and South Africa (comparing White and Black South Africans) used the same questionnaire around self and relatives' estimates of multiple intelligences (Furnham, Callahan & Akande, 2004). This research, in contrast to the earlier study, found few gender differences in estimates. When looking at ratings of White and Black South Africans, White South Africans tended to rate their relatives more highly than Black South Africans. The SMIP, however, has not been used in an African setting with adults or children in order to rate multiple intelligences. It has been used predominantly in China and Hong Kong.
African cultural practices, beliefs, attitudes, rituals, customs, values and communication styles all influence the definition, attributes and characteristics of the concept of intelligence (Mpofu, 2002, 2004; Ngara & Porath, 2004). Concepts of intelligence can be based on what is socially meaningful, built on local, social and environmental conditions (Mpofu, Ntinda & Oakland, 2012). Ugandans have been shown to view intelligence as a social construct. The Shona of Zimbabwe regard intelligence as 'public-spirited' behaviour or achievements that could be beneficial to the group (Irvine, 1970, 1988; Wober, 1974). For the Ndebeles of Zimbabwe, intelligence comprises wisdom, social responsibility, a socially constructive disposition, success in life, superior educational qualifications and abilities to problem solve (Mpofu, 1993, 2004). Through villagers' responses from a rural community in eastern Zambia, Serpell (1977, 1993) found the Chewas divided intelligence into four indigenous constructs: wisdom, aptitude, responsibility and trustworthiness. Wisdom and aptitude represented the cognitive aspects, and responsibility and trustworthiness the social aspects of intelligence. Again, for the Luo of Kenya the concept of intelligence could be divided into the two aspects of cognitive and social. There have been a number of studies in sub-Saharan Africa that have found that children's performance in village tasks valued by the community (e.g., herbal treatment for local common illnesses) is unrelated to academic achievement (Serpell, 1993, 2007, 2011b; Sternberg et al., 2001). It is suggested that cognitive values among the Chewa and Luo communities differ from those promoted in schools. As highlighted in other literature, school programmes in sub-Saharan Africa generally conform to Western cognitive values (Kasfir, 1983; Mandaza, 1986; Serpell, 1993; Serpell & Boykin, 1994).
In developing countries, teachers, government officials and district education officers typically believe that children from poor areas, who are first-generation learners with illiterate parents, are incapable of possessing talent, incapable of learning, having ability or achieving greatness (Dixon, 2012; Frasier, 1987; Humble, 2015; Iyer & Nayak, 2009). These attitudes transfer into the classroom and thus the educational practices of teachers. In slums and low-income areas of sub-Saharan Africa, children typically attend schools where rote learning is the order of the day (Dixon, Humble & Counihan, 2015; Hoadley, 2012; Nomlomo & Vuzo, 2014; Tabulawa, 2013). Rote learning and teaching to the test make it easier for government schoolteachers who have, in the main, become demotivated and removed from their educationalist roles and responsibilities (Chireshe & Shumba, 2011; Kremer, Muralidharan, Chaudhury, Hammer & Halsey Rogers, 2006; Tooley, 2009). School learners are unaccustomed to being asked to use their imagination and think differently to others. Children are never asked to voice their own opinions or think for themselves, just to regurgitate information provided by the teacher (Duflo, Dupas & Kremer, 2015; Kremer, Brannern & Glennerster, 2013). Communities value forms of ability that allow a person to meet their social obligations (Nsamenang, 2006). One proxy of intelligence for the Shona and Chewa is life success, which could partly be gained by acquiring literacy skills (i.e., access to jobs, income, improved health and wellness). Therefore schooling is regarded as important in such communities with respect to the acquisition of such abilities (Dasen, 2011; Olson, 1986; Stemler et al., 2009).
In this study we aimed to explore further the use of self-estimates with poor children in Tanzania regarding their multiple intelligences. We also set out to investigate whether their schooling experience or culture might influence their interpretation of the items. This led us to the following research questions:
• What would be the structural representation of the self-perceived multiple intelligences for this set of children?
• What insights could be given, if any, to interpret the children's self-perception groupings given their school experiences and culture?
Method
To look at these questions, the SMIP was given to children who were taking part in a project that gathered data, including background information, achievement outcomes, and teacher, parent and pupil beliefs, around concepts of talent. The overall aim of the whole project was to identify and nurture talented children from poor areas of Dar Es Salaam, Tanzania. Chan (2001, 2003) developed the 'Student Multiple Intelligences Profile' (SMIP) and subsequently a revised SMIP-24 (featuring 24 items incorporating eight subscales) based on Gardner's MI. In one piece of research, the structure of perceived multiple intelligences was explored with 1,464 primary and secondary Chinese students who were judged to be gifted intellectually, academically or talented in a non-academic area (Chan, 2006, p. 328). Chan's findings showed there to be a two second-order factor model, with the eight intelligences grouping into two conceptual clusters of 'non-personal' intelligences and 'personal' intelligences. Our paper utilised the results of the SMIP questionnaire to investigate the construct of the intelligence structure as applied to these African children. We initially used exploratory factor analysis (EFA) to explore the dimensionality of the SMIP and uncover the smallest number of interpretable factors needed to explain the correlations among the items. The suggested empirical model was evaluated using confirmatory factor analysis (CFA) to investigate how well the pre-specified factor solution obtained from EFA reproduced the sample data matrix of the measured variables.
Participants
A total of 1,857 primary students living in poor areas of Dar Es Salaam in Tanzania were asked to complete a questionnaire and undertake tests as part of an Economic and Social Research Council (ESRC) funded project. Students completed the tests/questionnaires in groups of 40-50. Tests included a non-verbal ability test, mathematics, English reading and Kiswahili tests. The study also included teacher interviews, parent interviews and household surveys. All students and their parents were informed through their schools that the purpose of the assessment exercise was to assess the strengths or talent areas of the students, that participation was voluntary, and that the results of the assessment would be kept strictly confidential and used for research purposes only (Humble, 2015). All grade 4 or 5 students in each of the 17 schools participated. No student declined participation, and complete data regarding the SMIP were obtained from 1,829 students. These students were aged between 8 and 15 years old (M = 11.02, SD = 1.14).
Measure
This research used a version of Chan's SMIP translated into Kiswahili. This profile was intended to tap children's talent potentials in seven intelligences. Initially there were 27 items, which were refined and modified after consulting various sources on multiple intelligences, including, notably, Armstrong's checklists (Armstrong, 1994). Seven part-time graduate students who were also full-time secondary school teachers were enlisted to help judge the item content and the appropriateness of these items in reflecting specific intelligences. Each of these teachers also administered the checklist to secondary students to obtain feedback on the ease or difficulty in responding to the checklist. Finally, based on the feedback from secondary students and teachers, 21 items (three items for each intelligence) were retained in the final version. The original 21-item checklist covered seven intelligences: verbal-linguistic, musical, logical-mathematical, visual-spatial, bodily-kinaesthetic, intrapersonal, and interpersonal intelligences (see Chan, 2001). The checklist was revised to include naturalist intelligence by adding three items adapted and modified from the checklist of Armstrong (1994), and these three items were included in the revised 24-item SMIP. According to Chan (2001), the subscales had sound psychometric properties, including moderate internal consistency (Cronbach's α between 0.64 and 0.76) and significant correlations with external measures. This research utilised the SMIP self-perception questionnaire. The students rated the degree to which they perceived each of the 21 items on the checklist as descriptive of themselves using a five-point scale ranging from 1 (least descriptive of me) to 5 (most descriptive of me). The 21 items in our version of the SMIP questionnaire did not include the set on bodily-kinaesthetic. Children in government primary schools in Tanzania typically do not speak or read fluently in English. Therefore, in order to address this, the SMIP was translated into Kiswahili. Initially this was piloted and any amendment regarding language usage was made at this stage, taking the advice of in-country educationalists. It is recognised that there could be potential effects when translating tests to be used cross-culturally. Everything was done to try to minimise such effects. The internal reliability of the new Kiswahili-language SMIP form showed a Cronbach's α equivalent to 0.8, giving some evidence of reliability. In order for children with poor reading ability to still undertake the test, each item was read out in Kiswahili by the researcher, providing time after each item for children to complete the Likert scale.
Procedure
Testing took place within the children's own classes in their own schools. Letters were sent home and meetings arranged where requested to explain the project and the whole procedure that was to take place. Testing occurred in the morning for all participants. Education Masters students from the University College Dar Es Salaam administered the tests. They had been given special training from the research principal and co-investigators specifically for the project. This part of the overall testing procedure lasted around 30 minutes.
Overview of analysis
First we conducted exploratory factor analysis (EFA) to evaluate the dimensionality of the data set of multiple indicators on the SMIP questionnaire to uncover the smallest number of interpretable factors needed to explain the correlations. The suggested underlying structure, which had been tentatively established from the EFA empirical analyses, was then subjected to a confirmatory factor analysis (CFA).
What would be the structural representation of the self-perceived multiple intelligences for this set of children?
Student self-ratings on SMIP items. To explore gender and age-group differences, the mean ratings of boys (n = 872) and girls (n = 957), and those of students in the younger age group (below 11 years; n = 960) and older age group (age 11 and above; n = 869), were computed separately and compared. Significant gender differences, after adjusting the level of significance for multiple comparisons using the Bonferroni procedure, were observed in three items of the questionnaire. Girls reported themselves as more verbose than boys (girls M = 3.66, SD = 1.38; boys M = 3.44, SD = 1.45; t(1827) = 3.387, p < 0.001) and rated themselves more highly for honesty and integrity (girls M = 3.61, SD = 1.3; boys M = 3.49, SD = 1.33; t(1827) = 1.96, p < 0.05). Boys reported themselves as more likely to play an instrument than girls (boys M = 2.98, SD = 1.52; girls M = 2.71, SD = 1.52; t(1827) = 3.899, p < 0.001).
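For readers wishing to reproduce this style of comparison, a hedged sketch is shown below (toy data only; the real ratings are not reproduced here): independent-samples t-tests per item with a Bonferroni-adjusted significance threshold over the 21 SMIP items.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_items, alpha = 21, 0.05
girls = rng.integers(1, 6, size=(957, n_items))   # Likert 1-5, toy data
boys = rng.integers(1, 6, size=(872, n_items))

for item in range(n_items):
    t, p = ttest_ind(girls[:, item], boys[:, item])
    if p < alpha / n_items:                        # Bonferroni correction
        print(f"item {item}: t = {t:.2f}, p = {p:.4f}")
```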
Considering the younger and older groups, there was a significant difference whereby the younger group (aged under 11 years) reported being more sensitive to others' feelings than the older group (younger M = 4.08, SD = 1.29; older M = 3.95, SD = 1.41; t(1827) = 2.07, p < 0.05) and more likely to watch birds and animals (younger M = 3.67, SD = 1.4; older M = 3.49, SD = 1.47; t(1827) = 2.67, p < 0.01). The means and standard deviations were tabulated in order to look at the self-ratings of the 1,829 students on the 21-item SMIP. These data are shown in Table 1.
Factor models of perceived multiple intelligences. Exploratory factor analyses were undertaken in order to test for the smallest number of interpretable factors needed to explain the correlations in the 21 items of the SMIP questionnaire. Gender and age effects were not evaluated.
An initial estimation yielded four factors with eigenvalues exceeding unity, accounting for 40% of the total variance. The chi-squared values computed for the evaluation of the fit for one to seven factor solutions and the corresponding amount of total variance accounted for are summarised in Table 2. The results indicate that a statistically adequate solution, one that yields a non-significant chi-squared, would require a solution beyond the seven factor solution. However, taking 0.001 as the cut off criterion and the eigenvalue equal to one criterion, we submit that the four-factor solution could be regarded as an adequate representation of our data.
It can be noted from the scree test (Figure 1) that the point where the graph changes shape, with a substantial decline in the magnitude of the eigenvalues, occurs where there are four eigenvalues greater than one. As is pointed out by Gorsuch (1983), the scree test is a good indicator when the sample size is large. A four-factor solution is also supported using the logic of the Kaiser-Guttman rule: when an eigenvalue is less than 1.0, the variance explained by a factor is less than the variance of a single indicator.
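A short sketch of the eigenvalue-greater-than-one rule is given below (the response matrix here is a random placeholder for the 1,829 x 21 SMIP ratings, so its factor count will not match the four factors reported above):

```python
import numpy as np

rng = np.random.default_rng(1)
responses = rng.normal(size=(1829, 21))           # placeholder ratings
corr = np.corrcoef(responses, rowvar=False)       # 21 x 21 correlation matrix
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]

n_factors = int(np.sum(eigenvalues > 1.0))        # Kaiser-Guttman rule
print(eigenvalues[:5], n_factors)
# A scree plot draws these eigenvalues against their rank; the 'elbow'
# is where their magnitude stops declining sharply.
```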
Structural representation of self-perceived multiple intelligences. The factor structure uncovered in exploratory factor analyses is shown in Table 3. As can be seen, there is a lack of simple structure within this four-subscale factor model. Some of the items from the original seven subscales (musical, logical-mathematical and naturalist), as suggested by Gardner, do not appear in the same factor groupings. For example, three items that measure the self-rating of mathematical intelligence are found in three separate factors. This, at least from a Western perspective, does not seem to fit theoretically.
To provide further support for the four-factor model, we conducted a series of confirmatory factor analyses based on the factor structure uncovered in exploratory factor analysis, as shown in Table 3. Confirmatory factor analyses were conducted on the total sample using the STATA package. A range of fit and comparison-based indices, including chi-square, was used to determine whether the variance of the intelligence model fitted these African data (Bentler, 1989; Browne & Cudeck, 1993; Steiger, 1990). The fit indices are shown in Table 4 and include the Root Mean Square Error of Approximation (RMSEA), Standardised Root Mean Square Residual (S-RMR), Coefficient of Determination (CD), Tucker-Lewis Index (TLI) and Comparative Fit Index (CFI). Hu and Bentler (1999) suggest various cut-offs for these fit indices. To minimise Type I and Type II errors one should use a combination with the S-RMR or the RMSEA. In general, good models should have an S-RMR < 0.08 or an RMSEA < 0.06, with the fit index values > 0.9. According to Brown (2006), CFI and TLI values in the range of 0.9 to 0.95 may indicate acceptable fit if other fit measures provide evidence of a good model fit. Regarding the RMSEA, Brown states that a close fit is indicated by values less than 0.05 and an acceptable fit by values between 0.05 and 0.08. A summary of the tests for invariance of the structure of the four multiple intelligences is shown in Table 4. Model 2 shows the best fit, with RMSEA and S-RMR < 0.06 and CD, TLI and CFI > 0.9. It can be seen from Table 5 that the data fit the four-factor model moderately well and that there are correlations among the dimensions. The most highly correlated dimensions were L1 and L2 (r = 0.78).
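As a worked example of one of these indices, the RMSEA point estimate can be recovered from the model chi-square, its degrees of freedom and the sample size; the helper below uses illustrative values, not those of the fitted models.

```python
from math import sqrt

def rmsea(chi2: float, df: int, n: int) -> float:
    # Standard point estimate: sqrt(max(0, (chi2 - df) / (df * (n - 1)))).
    return sqrt(max(0.0, (chi2 - df) / (df * (n - 1))))

# Illustrative numbers only (not from this study):
print(round(rmsea(chi2=1200.0, df=183, n=1829), 3))  # ~0.055 -> 'acceptable'
```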
Higher-order factor solutions of the perceived four intelligences. It was decided to investigate a second-order factor model owing to the correlation of the latent constructs L1 and L2. Table 6 shows a relatively good fit for the model, and Figure 2 presents the hypothesised one second-order factor model visually. Two of the four proposed latent factors, L1 and L2, appear to be included under the one second-order factor labelled H1. H1 has correlations with L3 and L4 of 0.81 and 0.71 respectively.
What insights could be given, if any to interpret the children's self-perception groupings given their school experiences and culture?
Table 4. Summary of tests for invariance of the structure of four-factor models using confirmatory factor analysis.
Relationship between the SMIP items and student test outcomes. In addition to the data around the SMIP, student outcomes had also been gathered for these students on mathematics, English reading and Kiswahili tests. The reading test used was the 'Single Word Reading Test' (National Foundation for Educational Research) and the mathematics test was made up of items taken from GMADE 1 to 4 (Pearson). In order to address issues around cross-cultural transportability of tests, pilots were carried out in Morogoro schools, west of Dar Es Salaam. Teachers and educationalists in Nairobi devised the Kiswahili test. For all of the tests, changes were made after the pilot through discussions and in collaboration with local teachers. The analyses above regarding the EFA highlighted a lack of simple structure for the SMIP within the seven multiple intelligences subscales as suggested by Gardner. Looking at how the children's test scores correlate with specific items may provide some explanation. Three self-perception questions are given in each of the areas of verbal and mathematical intelligence. These are shown in Table 7, along with their correlations with children's test scores.
Lin1 'I enjoy talking and playing with words', Mat1 'I actively search for patterns, cause-effect and logical relationships', and Mat2 'I collect, categorise, study and analyse things' are not correlated with the areas of study to which the multiple intelligence items purportedly link. The items Lin1 and Mat3 are not significantly correlated with any test; Mat1 is significantly negatively correlated with both Kiswahili and mathematics. A possible explanation is considered in the discussion below.
Table 6. Testing a higher-dimensional model using second-order confirmatory factor analyses. Note: fit indices are from Stata analyses. χ² = normal theory weighted least squares χ²; RMSEA = Root Mean Square Error of Approximation; S-RMR = Standardised Root Mean Square Residual; TLI = Tucker-Lewis Index; CD = Coefficient of Determination; CFI = Comparative Fit Index; ML = maximum likelihood confirmatory factor analysis. *p < 0.001.
Children's self-perception groupings and cultural beliefs. Shavelson, Hubner & Stanton (1976), in their review of existing literature and instruments utilised to measure self-concept, state that: 'In very broad terms, self-concept is a person's perception of himself. These perceptions are formed through his experience with his environment, and are influenced especially by environmental reinforcements and significant others.' (p. 411) Our data suggest that we would agree. Indeed, through the multi-dimensionality of the project, household data were collected from a sample of parents who were asked to explain their understanding of intelligence. Typical of the responses were:
• 'Is innovative and creative and is inquisitively curious to know more';
• 'the talented student child has personal capacity';
• 'they make wise decisions';
• 'the child who is curious and creative';
• 'the child who is doing the right thing'.
The words 'curious', 'wise', 'inquisitive', and 'do the right thing' (trustworthiness) all comply with findings around cultural concepts of intelligence in Africa. Our four-factor model solution (Table 5), as determined by CFA, reveals a factor structure that could be interpreted as being consistent with African cultural beliefs. The ideas around cultural and environmental influences are taken further in the discussion below.
Discussion
This paper set out to investigate two areas of interest: first, to consider the structural representation of the self-perceived multiple intelligences for these 1,829 children using the SMIP, and second, once the best fit model was found, to discuss possible influences regarding the children's self-perceptions given their school experiences and culture. Campbell et al. (2004) classified Gardner's eight intelligences into three clusters: personal related, object related, and object free intelligences. Others in African settings have found that concepts of intelligence are based around what is socially meaningful (Mpofu et al., 2012). What is regarded as beneficial for the community is often viewed as intelligence, not a 'set of cognitive abilities as highlighted by Western concepts' (Mpofu et al., 2012, p. 4). The Chewas of Zambia divided intelligence into four indigenous constructs, wisdom, aptitude, responsibility and trustworthiness, which could then be represented by two aspects, cognitive and social (Serpell, 1977, 1993). The same two aspects were found to be concepts of intelligence for the Luo of Kenya. In our research, a four-factor first-order model was shown to have a good fit. The items making up each of the four factors are shown in Table 5. The combination of items within the first factor (L1) may not be surprising in an African setting, as social propinquity and the social timeliness with which a child responds to collective needs with others is a valued cultural behaviour in sub-Saharan culture (Mpofu et al., 2012; Serpell, 2011a, b). The second factor (L2) contains items that could infer the ability to be curious about nature and/or interacting with one's environment and the community. The combination of items in this factor highlights the awareness of self and others, and awareness of nature and environment. In some African countries such as Uganda and Zimbabwe, intelligence is viewed as a social construct built on local, social and environmental conditions (Mpofu et al., 2012; Sternberg et al., 2001). The third factor (L3) puts together items that suggest creativity, inquisitiveness, aptitude and curiosity. The Chewa of Zambia indeed regard aptitude and wisdom as two of the indigenous intelligence constructs (Serpell, 1977, 1993). The fourth factor (L4) is more difficult to define, being made up of musical items and one dominant visual-spatial item. The one-factor second-order model (as shown in Figure 2) highlights an overall awareness intelligence having correlation with L1 and L2. This could lead us to postulate that the findings broadly agree with the literature that intelligence in sub-Saharan Africa is regarded essentially as a social construct, and is socially oriented behaviour that benefits a collective society (Dasen, 2011; Grigorenko et al., 2001; Irvine, 1970, 1988; Mpofu et al., 2012; Wober, 1974).
The pedagogical approach to teaching in sub-Saharan African schools is typically rote learning. According to Gardner (1999), an education system that teaches and assesses children using one method is 'unfair'. Such a strategy would only work if everyone had the same mind and one kind of intelligence. Gardner has suggested teachers should consider using different teaching strategies so students can learn through their own individual strengths. Although schools cannot teach intelligence, they can develop intelligences. Gardner provides the example of the Suzuki violin method, where the child who 'is devoting many hours each week to a single kind of pursuit and to the development of a single intelligence' is doing so 'at the cost of stimulating and developing other intellectual streams' (p. 396). The MI theory has important implications for classroom instruction and procedures, which in turn impact on the facilitation and development of the full spectrum of students' intelligences (Zhang & Chan, 2010). In order to provide children with more opportunities to learn through their strengths, Gardner developed the entry point approach, suggesting seven entry points aligned to different intelligences. An entry point puts the child directly at the centre of a topic, stimulating their interests and therefore further exploration (Zhang & Chan, 2010). Children in a Tanzanian classroom are never asked to voice their own opinions or think for themselves, but just to regurgitate information provided by the teacher.
The children's mathematics experience in school would be focused on number only. Mathematics lessons consist of the memorisation of answers and algorithms that can be recalled during tests and examinations. In language classes, texts are rote-learnt and answers to comprehension questions memorised. This could bring problems regarding the SMIP items within both the verbal-linguistic and logical-mathematical intelligences. The concepts that mathematics is about collecting and categorising, exploring patterns and looking for relationships, as well as logical reasoning and critical thinking, are notions that these children will never have experienced. The opportunity to write creatively or discuss ideas or thoughts never occurs in class. Thus, one possible reason for the low correlations within these subscales is the disassociation between the items and the children's experiences of these subjects in school. The children may link their abilities to contexts without generalising them to their own personal qualities (Mpofu, Myambo, Mogaji, Mashego & Khaleefa, 2006; Serpell, 2011a, b; Sternberg et al., 2001). Another possible reason for the low correlations is that the schools are not developing the children's intelligences. Assessment results not only illustrate how children understand or perform but may also reflect the quality of the current instruction (Zhang & Chan, 2010).
Future studies could consider qualitative inquiry to unravel how these learners interpret this abstraction task. Findings could inform the design or selection of a more credible abstraction task for Kiswahili-speaking learners. In order to consider the structure of perceived multiple intelligences in African settings, it may be beneficial to use indigenous tasks for assessing and providing further insight into the structure of multiple intelligences. When using self-concept measures in different cultural contexts it is important to note the influences of environmental reinforcements. | 2018-12-12T07:26:27.057Z | 2016-03-03T00:00:00.000 | {
"year": 2016,
"sha1": "9d513db4e9699e0ce11877f0c3fc8dc8180da443",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/03054985.2016.1159955?needAccess=true",
"oa_status": "HYBRID",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "f128b56acee165bde054a015d5150c2ff116c8e6",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology"
]
} |
209377446 | pes2o/s2orc | v3-fos-license | Numerical Investigation of Intra-abdominal Pressure Effects on Spinal Loads and Load-Sharing in Forward Flexion
The intra-abdominal pressure (IAP), which generates extensor torque and unloads the spine, is often neglected in most of the numerical studies that use musculoskeletal (MSK) or finite element (FE) spine models. Hence, the spinal loads predicted by these models may not be realistic. In this work, we quantified the effects of IAP variation in forward flexion on spinal loads and load-sharing using a novel computational tool that combines a MSK model of the trunk with a FE model of the ligamentous lumbosacral spine. The MSK model predicted the trunk muscle and reaction forces at the T12-L1 junction, with or without the IAP, which served as input in the FE model to investigate the effects of IAP on spinal loads and load-sharing. The findings confirm the unloading role of the IAP, especially at large flexion angles. Inclusion of the IAP reduced global muscle forces and disc loads, as well as the intradiscal pressure (IDP). The reduction in disc loads was compensated for by an increase in ligament forces. The IDP, as well as the strain of the annular fibers were more sensitive to the IAP at the upper levels of the spine. Including the IAP also increased the ligaments' load-sharing which reduced the role of the disc in resisting internal forces. These results are valuable for more accurate spinal computational studies, particularly toward clinical applications as well as the design of disc implants.
INTRODUCTION
Quantifying the contribution of the active and passive components of the human trunk during various daily, occupational, or athletic activities is essential for the design of effective spinal fixation systems, and would greatly benefit research and clinical stakeholders in the field of spinal biomechanics. Intra-abdominal pressure (IAP), considered the factor most likely to influence lumbar spinal mechanics, has been investigated under static and dynamic lifting conditions for many decades (Davis, 1956; Bartelink, 1957; Davis and Troup, 1964; Andersson et al., 1976; McGill et al., 1990; Marras and Mirka, 1996; Hagins et al., 2004). Most of the existing studies advocate that the IAP produces an extensor torque (Bartelink, 1957; Morris et al., 1961), which reduces the spinal loads and back muscle activity, hence influencing the overall loading scenarios and stability of the lumbar spine (Daggfeldt and Thorstensson, 1997, 2003; Cholewicki and Reeves, 2004). This mechanism has also served as a solution to the existing paradox in biomechanical models where the predicted spinal loads exceeded the tissue-tolerance limits during weight-lifting tasks (Chaffin, 1969). Abdominal belts have therefore been prescribed therapeutically to increase the IAP and unload the spine (Harman et al., 1989; Lander et al., 1992).
On the other hand, some experimental studies have questioned the unloading role of the IAP. Nachemson et al. (1986) found that an increase in the intradiscal pressure (IDP) is associated with a concurrent increase in the IAP during Valsalva maneuvers. It has also been reported that trunk muscle contraction is coupled with the generation of IAP (Cholewicki et al., 2002), and that the EMG activity of 12 trunk muscles increased due to the elevated IAP (Cholewicki et al., 1999). McGill and Norman (1987) and McGill et al. (1990) concluded that the IAP-generated extensor moment is compensated by the flexor moment due to the co-contraction of the abdominal muscles associated with the elevated IAP. In addition, the cross-sectional area of the diaphragm and the moment arm of the net IAP have been considered as reasons for overestimating the extensor moment produced by the IAP (McGill and Norman, 1987; McGill, 1993).
Uncertainty about the pattern of abdominal muscle co-activity accompanying an elevated IAP has hence fueled the controversy surrounding the unloading role of the IAP (Arjmand and Shirazi-Adl, 2006a; Stokes et al., 2010). Moreover, some studies have suggested that the normal physiological role of the IAP cannot be adequately explored in contrived experiments, such as the Valsalva maneuver or maximum voluntary strength exertions (Arjmand and Shirazi-Adl, 2006a; Stokes et al., 2010).
Due to the inherent morphological and mechanical complexity of the spine and its structural components, numerous musculoskeletal (MSK) rigid-body, analytical, and computational models have emerged as effective tools for assessing the relationship between elevated IAP and trunk spinal loads and stability. Stokes et al. (2010) confirmed the unloading role of the IAP using a biomechanical model with detailed abdominal wall structure and muscle paths. Later, they revealed that pressurization of the abdomen increases lumbar spine stability, although the degree of spinal stability was not significantly affected by selective activation of either the transversus abdominis or the oblique muscles (Stokes et al., 2011). The computational studies conducted by Arjmand and Shirazi-Adl (2006a) and Park et al. (2013) revealed that the IAP reduced the spinal joint forces during weight-bearing standing if no abdominal muscle co-activation is considered. They also demonstrated that the unloading and stabilizing action of the IAP is both posture and task specific (Arjmand and Shirazi-Adl, 2006a). The shared limitation of the aforementioned studies is that all of them used a prescribed IAP when quantifying its effects on spinal loads.
More recently, Arshad et al. (2016) explored the effects of the IAP and spinal rhythm on the spinal loads in flexion using AnyBody (AnyBody Technology, Aalborg, Denmark), where the IAP could be increased based on the optimization of the total muscle stress. While previous experimental/computational investigations of the IAP effects on muscle forces (Hodges et al., 2001; Arshad et al., 2016), on spinal loads (Daggfeldt and Thorstensson, 2003; Arshad et al., 2016), and on spinal stiffness (Hodges et al., 2005) have greatly contributed to spinal biomechanics, the influence of the IAP on the IDP and on spinal load-sharing remains undetermined during static flexion. This knowledge is critical for various clinical applications, including informing the design of disc implants and shedding more light on the elusive pathophysiology of low back pain and other spinal disorders. The current research thus aims, first, to delineate the modeling of the IAP in a MSK model and, second, to quantify the effects of the IAP on muscle forces, IDP, and spinal load-sharing in the lumbosacral spine during forward flexion. This is accomplished using our combined MSK and FE modeling methodology, previously validated and published (Liu et al., 2018).
Musculoskeletal Model
An AnyBody MSK model (Ver. 6.0, AnyBody Technology, Aalborg, Denmark, model version 1.63) was developed and used to simulate the musculoskeletal biomechanics of a typical male of 70 kg weight and 168 cm height subjected to 60° forward flexion, with and without IAP. The model is composed of the skull, cervical region, upper arms, a rigid thorax (T1-T12 as a single segment), and five rigid lumbar vertebrae (L1-L5) together with the pelvis and sacrum. The Anterior Longitudinal Ligament (ALL), Posterior Longitudinal Ligament (PLL), Intertransverse Ligament (ITL), Ligamentum Flavum (LF), Supraspinous Ligament (SSL), Interspinous Ligament (ISL), and Capsular Ligament (CL) were all incorporated in the model and modified to match the corresponding properties in our validated, published FE model (Liu et al., 2018). The ligament forces were set to zero in the neutral standing position. The facet joint contacts were also activated during simulation.
An optimization algorithm in AnyBody based on a muscle recruitment criterion was employed to calculate the load distribution among the various muscle groups. The objective function (1) used in the muscle recruitment optimization routine was to minimize the sum of the squares of the ratios of muscle force to muscle strength (de Zee et al., 2007):
G = Σ_{i=1}^{n} (f_i / N_i)^2   (1)
where f_i is the force in muscle i, N_i is the strength of muscle i, and n is the total number of muscles. (Muscle abbreviations: global muscles: ICPT, iliocostalis lumborum pars thoracis; LGPT, longissimus thoracis pars thoracis; local muscles: ICPL, iliocostalis lumborum pars lumborum; LGPL, longissimus thoracis pars lumborum; PM, psoas major; MF, multifidus; QL, quadratus lumborum.) The abdominal cavity was simulated using a cylinder with a maximum pressure of 26.6 kPa (Essendrop, 2003). The IAP model is mainly composed of one rigid buckle that provides attachments for the abdominal muscles, namely the external oblique (EO), internal oblique (IO), and rectus abdominis (RA), and five rigid artificial disks forming the structure for the transversus muscles, which are responsible for generating the IAP (Figure 2A). The buckle and artificial disks are driven by the kinematics of the thorax, lumbar spine, and pelvis. The abdominal muscles (EO, IO, RA) and five artificial supporting muscles connecting the artificial disks and the buckle are responsible for maintaining equilibrium of the buckle (Figure 2B). The supporting muscles push the artificial disks (Figure 2B), which activates the transversus muscles to maintain the equilibrium of the buckle. The activated transversus muscles attached to the artificial disks control the anterior-posterior movement of the artificial segments (Figure 2B). This movement, together with the distance between thorax and pelvis, changes the radius (R) and height (H) of the abdominal cavity (cylinder), respectively, and thereby the volume of the abdominal cavity; this relationship can be expressed using Equation (2):
ΔV = V − V0, with V = πR²H   (2)
where V is the volume of the cylinder, V0 is the initial volume of the cylinder, R represents the radius of the cylinder at each artificial disk, and H is the height of the cylinder. Finally, the change in abdominal cavity volume activates the IAP, which is modeled as an artificial muscle with strength equivalent to the maximum abdominal pressure, to balance the transversus muscle forces and establish equilibrium (Figure 2B). In other words, any change in these supporting muscles will affect the force in the transversus muscles, which in turn will influence the IAP. This pressure then acts on the nodes defined on the thorax and pelvis as concentrated forces (Figure 2A). All muscles used in the IAP model are governed by the optimization function used for the entire MSK model. The range of IAP values was approximated to vary between 0.1 and 5.7 kPa from neutral standing to forward flexion (60°) (Schultz et al., 1982). The lumbo-pelvic ratio and lumbar rhythm were selected based on published experimental data (Granata and Sanford, 2000; Arjmand and Shirazi-Adl, 2006b). The muscle forces and joint forces at the T12-L1 junction predicted by the MSK model, together with the gravitational forces, were input into our previously developed and validated FE model to predict the IDP, disc forces and moments, and spinal load-sharing.
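To make the recruitment criterion in Equation (1) concrete, the following is a minimal sketch of such a quadratic optimization. The moment arms, external moments, and muscle strengths are purely illustrative and are not taken from the actual model; AnyBody solves a much larger version of the same problem internally.

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: 4 muscles, 2 equilibrium equations (A @ f = m_ext).
A = np.array([[0.05, 0.04, -0.03, -0.02],      # moment arms, m (illustrative)
              [0.01, -0.02, 0.03, 0.01]])
m_ext = np.array([40.0, 5.0])                  # external joint moments, N m
N = np.array([800.0, 600.0, 400.0, 300.0])     # muscle strengths, N

cost = lambda f: np.sum((f / N) ** 2)          # Equation (1)
res = minimize(cost, x0=np.full(4, 50.0),
               constraints={"type": "eq", "fun": lambda f: A @ f - m_ext},
               bounds=[(0.0, Ni) for Ni in N])  # muscles only pull: 0 <= f_i <= N_i
print(res.x)                                    # optimal muscle forces, N
```

The equality constraints enforce joint equilibrium, the bounds reflect that muscles can only pull, and the quadratic cost distributes load toward stronger muscles.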
Finite Element Model
Geometry of the lumbosacral vertebrae (L1-S1) in the neutral standing posture was exported from the MSK model to create the FE model, after detailed cleaning of spikes and sharp edges using Geomagic software (Geomagic Studio 2014, 3D Systems, USA). Meshing was conducted using Hypermesh (Hyperworks 14.0, Altair, USA). The adjacent endplates were first meshed using 4-node shell elements and then extruded into 7 layers of 8-node brick elements to create the intervertebral disc, which included the annulus fibrosus and nucleus pulposus with volumes equal to 56 and 44% of the disc volume, respectively (Schmidt et al., 2007; El-Rich et al., 2009). Non-linear springs, distributed in concentric lamellae in a crosswise pattern close to ±35°, were used to model the annular fibers (Schmidt et al., 2007; El-Rich et al., 2009). The cortical bone was meshed with 3-node shell elements and filled with 4-node solid elements to simulate the cancellous bone. Five pairs of frictionless surface-to-surface contacts were created between adjacent facets with a gap of 1.5 mm along the L1-S1 levels. In addition, seven types of ligaments were modeled as non-linear springs, having the same non-linear behavior and the same insertion and origin points as those of the MSK model and resisting only tension forces. The material properties used in the FE model are summarized in Table 2.
Five FE models of the L1-S1 functional spinal units devoid of ligaments and facet joints were subjected to pure moments of 7.5 Nm in flexion and extension to predict the flexural stiffness of the intervertebral discs. These non-linear stiffness curves were used in the MSK model to simulate the spherical joints.
The joint forces, ligament forces, facet joint forces (null in both the upright and forward flexion postures in this simulation), and muscle forces predicted at the T12-L1 junction, together with the muscle forces at all spinal levels of the MSK model, were applied to the FE model. The resultant reaction force (shear and compression) at the T12-L1 joint, however, was substituted by a sagittal translation applied in the direction of the reaction force to correct the small discrepancy between the deformed position predicted by the MSK model and the one resulting from the FE model. This discrepancy is due to the difference in the approaches used to model the disc in the two models. The iteration process was performed until the reaction force generated by the sagittal displacements in the FE model was almost equal (within a predefined tolerance) to its counterpart predicted by the MSK model under the same posture. The gravitational force of each vertebra was also applied to the FE model. The sacrum was tilted according to the lumbo-pelvic rhythm used in the MSK model and then fixed throughout the simulation.
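The reaction-force matching described above is, in essence, a one-dimensional root-finding loop. The sketch below assumes a hypothetical `fe_reaction(u)` callable that wraps one FE solve and returns the T12-L1 reaction force for an imposed sagittal translation u, and it assumes a monotonic force-translation response; it illustrates the iteration logic only, not the actual solver coupling.

```python
def match_t12_l1_reaction(fe_reaction, target_force, tol=1.0,
                          u_lo=0.0, u_hi=20.0, max_iter=100):
    """Bisect on the sagittal translation u (mm) until the FE reaction force
    matches the MSK-predicted target (N) within tol. `fe_reaction(u)` is a
    hypothetical interface returning the FE reaction at the imposed translation."""
    for _ in range(max_iter):
        u = 0.5 * (u_lo + u_hi)
        r = fe_reaction(u)                 # FE reaction force at T12-L1, N
        if abs(r - target_force) < tol:
            return u                       # converged: FE and MSK reactions agree
        if r < target_force:
            u_lo = u                       # larger translation -> larger reaction
        else:
            u_hi = u
    raise RuntimeError("reaction matching did not converge")
```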
Simulated Tasks
The forward flexion (60°) posture was selected to investigate the influence of the IAP on muscle forces, spinal loading, and load-sharing. The IAP was activated (IAP_ON) or deactivated (IAP_OFF) by setting the IAP (artificial muscle activity) to normal or zero, respectively (Arshad et al., 2016). During flexion, the arms were always kept parallel to the direction of gravity.
IAP
The IAP model in the MSK model was validated by quantitatively comparing the predicted IAP values to in-vivo experimental data measured in upright and 30° forward flexion postures with the hands raised horizontally in front of the thorax (Figure 3A; Schultz et al., 1982). In agreement with the experimental findings, the model revealed a significant increase in the IAP from the neutral standing posture to 30° forward flexion (Figure 3A). The predicted IAP in the upright posture was 2.7 kPa, which is 1.3 kPa higher than the value reported by Schultz et al. (1982), while in the forward flexion posture the model predicted an IAP of 5.1 kPa, which is 1.3 kPa greater than its experimentally measured counterpart. These discrepancies could result from inter-individual variability and from differences in the methods used to measure the IAP. Results for the simulated postures (upright and 60° forward flexion with arms parallel to the gravity direction) revealed an increase of the IAP from 0.1 to 5.7 kPa as the trunk flexed during the entire simulation (Figure 3B). The magnitude of 0.1 kPa in the neutral standing posture agreed with its counterpart (0.2 kPa) in the literature (Andersson et al., 1976).
Muscle Force
The sums of the global and local muscle forces, with and without the IAP, were predicted using the MSK model (Figure 4A) as the lumbar spine flexion varied from 0 to 60°. In the neutral standing posture, the total local muscle force was predicted at ∼179 N, which was 27 N higher than the result from the model without IAP. In contrast, the total global muscle force was 78 N at the same posture, which was 17 N lower compared with the alternate model setting. Both global and local muscle forces increased substantially with the inclination of the trunk, reaching 961 and 1185 N, respectively, when the IAP was excluded. Activating the IAP in the MSK model reduced the total global muscle force substantially as the trunk inclined; this reduction reached 37% at 60° flexion. The total local muscle force decreased as well; however, the reduction started at 40° and reached its maximum value of 6.5% at 60° flexion.
The total force of each individual muscle group was predicted at the maximal trunk inclination (Figure 4B). The pronounced unloading effect of IAP was observed for almost all muscle groups, except for the Psoas Major (PM) and the Rectus Abdominis (RA), which remained silent regardless of the IAP settings.
In the local muscle group, the MF muscle contributed the most at 60° forward flexion, reaching 423 N, followed by the ICPL and LGPL, whose values were 379 and 349 N, respectively. The QL muscle produced the smallest force (34 N). In the global muscle group, the LGPT produced the greatest force (504 N), followed by the ICPT muscle (261 N). The forces in the abdominal muscles did not exceed 68 and 129 N in the EO and IO muscles, respectively. These values correspond to the case of deactivated IAP. Including the IAP in the model did not change the muscle force pattern. However, it clearly reduced the force in all muscles, particularly in the QL muscle and the global extensors LGPT and ICPT, where the drop reached 52, 46, and 40%, respectively. The maximum decrease of the force in the remaining extensor and abdominal muscles did not exceed 12%.
Annular Fiber Strain
High tensile fiber strain was produced at the innermost lamellae in the posterior region, the anterior region, or both, except at the L1-2 level, regardless of the presence of the IAP. In the presence of the IAP, high tensile strain in the collagen fibers was predicted in the anterior region of the innermost lamella at the L2-3 level. This high strain was then transferred to the posterior region of the innermost lamella at the L3-4 level. High tensile strain in both the anterior and posterior regions of the lamella was also observed at the L4-5 level. This trend became more pronounced at the L5-S1 level.
In contrast, in the absence of IAP effects the proportion of high tensile strain increased in the corresponding areas of the lamellae for all discs, except at the L5-S1 level, where a noticeably reduced proportion of high tensile strain was produced (Figure 5).
Variation of the annular tensile strain due to the inclusion or exclusion of the IAP (shown on the right end of Figure 5) was calculated as the strain of the model with no IAP minus its counterpart from the model with IAP. The maximum positive variation occurred in the lateral left and right regions of the lamella of disc L1-2, and in the innermost region of the lamella for the remaining levels. The region of maximum variations decreased from the upper to the lower levels of the spine (Figure 5). The minimum variation, corresponding to the case where the model with IAP predicted higher tensile strains, occurred in the posterior outermost region of the lamella at the L2-3 level and in the posterior innermost region of the lamella at the L3-S1 levels. The area of the minimum variations increased gradually from the middle to the lower levels of the spine.
FIGURE 5 | Annular fiber strain at all levels (L1-S1) predicted by the FE model at 60° forward flexion with both IAP settings. Variations were calculated with respect to the case with IAP activated (FLX-IAP_ON).
IDP
The IDP was calculated by averaging the pressure in all elements of the nucleus (Naserkhaki et al., 2016; Liu et al., 2018) and exhibited the same pattern at all lumbar levels (L1-5) with or without accounting for the IAP (Figure 6). On the other hand, a noticeable decrease in the IDP was observed in the presence of the IAP at all levels except the L5-S1 level. The greatest drop occurred at the L1-2 level and reached 26%, while the magnitude of the IDP remained almost unchanged at the L5-S1 level.
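As a sketch, this averaging step amounts to a mean of the element pressures over the nucleus; an element-volume-weighted variant is included for completeness, although the text only states a plain average.

```python
import numpy as np

def intradiscal_pressure(p_elem, v_elem=None):
    """Average nucleus pressure; volume-weighted if element volumes are given."""
    p = np.asarray(p_elem, dtype=float)
    return p.mean() if v_elem is None else np.average(p, weights=np.asarray(v_elem))
```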
Disc Forces and Moment
The disc compressive force followed the same pattern, a decrease from the L1-2 level to the L2-3 level followed by an increase along the lower levels, in both cases, with and without the IAP. Activating the IAP reduced the compressive force at all levels; the decrease ranged from 15 to 32% at the L5-S1 and L1-2 levels, respectively. When the IAP was active, the disc shear force was reduced by 24 and 28% at the L5-S1 and L2-3 levels, respectively. However, the L3-4 and L4-5 levels experienced increases of 5 and 33%, respectively, and the shear force changed direction from anterior to posterior at the L1-2 level (Figure 7A). The disc moment also dropped along the spinal levels, except at the T12-L1 and L5-S1 levels, when the IAP was included (Figure 7B). The greatest change was 31% and occurred at the L2-3 level.
Ligaments Forces
Activating the IAP increased the force in all ligaments significantly. The highest increases were found in the PLL (from 0 to 5 N) and CL (from 40 to 140 N) ligaments (Figure 8A). The ALL ligament experienced zero force in both IAP settings.
Spinal Load-Sharing
In the absence of the IAP, the compressive force was resisted mostly by the disc, while the ligament contribution did not exceed 5%. The ligaments also had a minor contribution (<14%) to resisting the shear force and moment compared to the discs, except at the L5-S1 level, where they carried about 41% of the moment. The facet joints had no contribution at all to load-sharing (Figure 8B).
Activating the IAP increased the role of the ligaments in carrying compressive and shear forces, as well as moments. The increase of the ligament contribution to moment resistance was substantial at all spinal levels. For instance, the ligament moment-sharing jumped from 14 to 60% and from 5 to 32% at the L1-2 and L2-3 levels, respectively. The facet joints remained silent in all cases.
DISCUSSION
Despite the ongoing debate regarding which abdominal muscle is responsible for raising the IAP (Daggfeldt and Thorstensson, 2003; Cholewicki and Reeves, 2004), the role of the IAP in unloading and stabilizing the lumbar spine has been established over the past few decades (Daggfeldt and Thorstensson, 1997, 2003; Cholewicki and Reeves, 2004; Arjmand and Shirazi-Adl, 2006a; Stokes et al., 2010; Park et al., 2013) and is well accepted within the spinal biomechanics community. The influence of the IAP on spinal load-sharing, however, remains understudied. This work attempted to quantify these effects during static forward flexion (60°), a posture associated with high abdominal muscle activity (Cresswell and Thorstensson, 1989), using our previously developed and validated method that combines MSK and FE models to predict muscle, ligament, and disc forces and moments, as well as the IDP and spinal load-sharing.
As a submodel of our current MSK model, the IAP was compared with in-vivo experimental data, showing an overall good quantitative match during neutral standing and forward flexion (Schultz et al., 1982; Figure 3A). In addition, the predicted IAP (Figure 3B) in the neutral standing posture was quite close to its counterpart in the literature (Andersson et al., 1976). Other experimental data, obtained during Valsalva maneuvers or maximum voluntary strength exertions (Nachemson et al., 1986; Cholewicki et al., 1999, 2002), were not compared here since they were not considered realistic representatives of the IAP role in static postures (Arjmand and Shirazi-Adl, 2006a; Stokes et al., 2010).
In alignment with previous studies (Arjmand and Shirazi-Adl, 2006a; Arshad et al., 2016), our results revealed that the inclusion of the IAP in the MSK model leads to a decrease in muscle forces, which is more pronounced in the global muscle group at larger flexion angles (Figures 3, 4). More specifically, the forces in two global muscle groups, the iliocostalis lumborum pars thoracis (ICPT) and the longissimus thoracis pars thoracis (LGPT), decrease substantially in the presence of the IAP. This also confirmed that the IAP can produce an extensor moment, which reduces the activity of the erector spinae muscles and thus alleviates spinal loads (Bartelink, 1957; Daggfeldt and Thorstensson, 2003). In addition, such a significant decrease confirms the hypothesized unloading role of the IAP and stresses the importance of its incorporation in simulation models of the lumbar spine, particularly when subjected to forward inclination (Cholewicki and Reeves, 2004).
FIGURE 7 | Disc compressive and shear forces (+ve in anterior direction) (A) and disc moments (+ve in flexion) (B) at 60° forward flexion predicted by the FE model.
The unloading role of the IAP in flexion is also confirmed by the predicted disc forces and moments. In the presence of the IAP, the compressive force decreases by up to 434 N (31%) across the levels, while a maximum reduction of 208 N (24%) in the shear force occurs. A maximum decrease of up to 5 N·m (32%) in the disc moments at the L1-5 levels is also found, in agreement with previous work (Daggfeldt and Thorstensson, 2003). The reduction in the disc loads due to activating the IAP is compensated for by an increase in ligament forces to maintain equilibrium at the same deformed posture, i.e., under similar loading conditions. This confirms that neglecting the IAP in spine biomechanics studies would underestimate the role of the ligaments and potentially yield unrealistic predictions of disc forces and moments.
The variations in the annular fiber strain between the two cases studied (IAP_ON and IAP_OFF) were small. A small increase in the proportion of high tensile strain fibers was observed at the L1-4 levels in the model with no IAP, which is mainly due to the increase in the (global and local) muscle forces applied to the FE model, which in turn increased the IDP.
It is noteworthy that the IDP decreased at all levels except the L5-S1 level, which again confirms the hypothesized unloading role of the IAP. An increase of up to 0.5 MPa in the IDP was observed at the L2-3 level when IAP effects were not considered. The reduction of the IDP was smaller at the lower levels (L3-5), in agreement with Hodges et al. (2005), who found that the IAP has a greater effect on the L2 vertebra than on the L4.
Load distribution among the various passive components is markedly altered in the presence of the IAP. Our results confirmed that the main contribution of the disc is to resist the external load in forward flexion, which is more pronounced without IAP simulation. The disc force- and moment-sharing varied between 86 and 100% of the total spinal force and moment, except at the L5-S1 level, where the ligament moment-sharing reached 40%. Including the IAP alleviated the disc load and increased the ligament load-sharing, particularly the moment-sharing.
Model Assumptions and Limitations
The current MSK model predicted the IAP based on the change of the abdominal cavity volume during forward flexion, rather than using typical prescribed experimental IAP values available in the literature (Cholewicki et al., 1999; Arjmand and Shirazi-Adl, 2006a; Stokes et al., 2011). The model also considered the interaction between the abdominal muscles and the physiological cross-sectional area and strength of these muscles. The transversus muscle, considered a significant contributor to the rise in the IAP (Cresswell et al., 1992; Cresswell, 1993), was also included in the IAP model. Setting the IAP (artificial muscle activity) to zero (Arshad et al., 2016) in order to switch it off in AnyBody did not eliminate the force in the abdominal muscles (EO and IO), as these muscles are attached to the buckle and artificial disks and contribute to their equilibrium (Figure 2B). Similar kinematics were considered in both IAP settings, and no antagonistic co-activation was simulated in this study. Although it is established that trunk stability is intimately associated with elevated IAP, this was not taken into consideration in the current study, because daily flexion is regarded as a skilled posture (de Zee et al., 2007) and has been widely investigated using optimization models (El-Rich et al., 2004; Arjmand and Shirazi-Adl, 2005, 2006a; Stokes et al., 2010; Park et al., 2013). By minimizing the overall muscle stress, the activation of muscles and the spinal loads may have been underestimated compared with realistic loads. Had the muscle activation pattern changed, the effects of the IAP would have needed to be re-evaluated (Arjmand and Shirazi-Adl, 2006a). Other limitations related to the methodology are mentioned elsewhere (Liu et al., 2018).
CONCLUSIONS
In summary, the current research investigated the influence of the IAP on muscle forces, on the loads in the passive spinal structures, and on load-sharing during forward flexion, using a previously validated tool that combines a MSK model of the upper body with a FE model of the lumbosacral spine. In alignment with the literature, this study confirmed the unloading role of the IAP during upper body inclination. The IAP had a significant influence on global muscle forces, yet negligible effects on local muscle forces. The substantial increase in the IDP, internal disc forces, and disc load-sharing triggered by the absence of the IAP should be taken into consideration in future modeling efforts of the lumbar spine in flexion postures. To the best of the investigators' knowledge, this is the first study that attempts to quantitatively assess the role of the IAP in detailed spinal biomechanics. Such information is essential for the accurate modeling of the spine toward more effective therapeutic and rehabilitative modalities, as well as the design and development of artificial implants. | 2019-12-17T14:02:23.756Z | 2019-12-17T00:00:00.000 | {
"year": 2019,
"sha1": "8c433313d290a173de878c034a35c75120602115",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fbioe.2019.00428/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8c433313d290a173de878c034a35c75120602115",
"s2fieldsofstudy": [
"Engineering",
"Biology"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
119215495 | pes2o/s2orc | v3-fos-license | The maximum mass and radius of neutron stars and the nuclear symmetry energy
We calculate the equation of state of neutron matter with realistic two- and three-nucleon interactions using quantum Monte Carlo techniques, and illustrate that the short-range three-neutron interaction determines the correlation between the neutron matter energy at nuclear saturation density and at the higher densities relevant to neutron stars. Our model also makes an experimentally testable prediction for the correlation between the nuclear symmetry energy and its density dependence -- determined solely by the strength of the short-range terms in the three-neutron force. The same force provides a significant constraint on the maximum mass and radius of neutron stars.
Since their discovery, neutron stars have remained our sole laboratory to study matter at supra-nuclear density and relatively low temperature. The equation of state (EoS) of matter at these densities is largely unknown but uniquely determines the structure of neutron stars and the relation between their mass (M) and radius (R). Matter that can support large pressure for a given energy density (typically called a stiff EoS) will favor large neutron star radii for a given mass. Such an EoS also predicts large values for the maximum mass of a neutron star that is stable with respect to gravitational collapse to a black hole. Conversely, a high-density phase that predicts a smaller pressure will result in more compact neutron stars and smaller maximum masses.
The recent accurate measurement of a large neutron star mass M = 1.97 ± 0.04 M_solar in the system J1614-2230 provides strong evidence that the high-density equation of state is stiff [1]. Interestingly, attempts to infer neutron star radii have favored relatively small values, ranging from 9 to 12 km [2][3][4]. Although the radius inference depends on specific model assumptions, these smaller radii imply a soft EoS in the vicinity of nuclear saturation density. Taken together, they indicate that the EoS of dense matter makes a transition from soft to stiff at supra-nuclear density. In this Rapid Communication we show that the three-neutron force (3n) is the key microscopic ingredient that determines the nature of this transition.
The importance of three-body forces in nuclear physics is well known, and quantum Monte Carlo (QMC) calculations of light nuclei have clarified their structure and strength. However, in these systems the dominant three-body force acts between two neutrons and a proton or between two protons and a neutron. While the force among three neutrons is important in light neutron-rich nuclei, its short-distance behavior is not easily accessible [5]. Properties of large neutron-rich nuclei are potentially sensitive to this interaction, especially if the symmetry energy provides a reliable measure of the energy difference between pure neutron matter and symmetric nuclear matter at saturation density. There has been much recent progress in both theory and experiment to measure the symmetry energy and its density dependence, as reviewed in Refs. [6,7]. The symmetry energy is expected to be in the range 32 ± 2 MeV. We explore this experimentally suggested range for the nuclear symmetry energy and show that a more precise determination is needed to adequately constrain the 3n interaction.
In this work we solve the non-perturbative many-body nuclear Hamiltonian using the auxiliary field diffusion Monte Carlo (AFDMC) method [8]. Its accuracy in studying nuclear systems has been tested in light nuclei [9]. The extension to include three-body forces in pure neutron systems is straightforward, with no additional approximations within the AFDMC technique [10], and a comparison with the Green's function Monte Carlo (GFMC) method has been extensively tested in neutron drops [11]. We present results for the EoS of neutron matter using phenomenological two-neutron (2n) potentials, which provide an accurate description of nucleon-nucleon scattering data up to high energies, and study the role of the poorly constrained 3n interaction.
In earlier work it has been established that the EoS in the density regime (1-3)ρ0 plays an essential role in determining the neutron star radius [12]. In this density regime, the 3n interaction plays a critical role because of a large cancellation between the attractive and repulsive parts of the 2n interaction, arising from its long- and short-distance behavior, respectively. Consequently, we find that the neutron star radius for a canonical mass of 1.4 M_solar is especially sensitive to the 3n interaction. Although matter in the neutron star will contain a small admixture of protons, here we calculate the EoS of pure neutron matter for the following reasons. First, the structure of the interactions between neutrons is simpler than that between neutrons and protons. Second, these simpler interactions are amenable to QMC methods for solving the many-body problem, as the system is devoid of the complexities of the isospin-dependent spin-orbit and three-nucleon potentials, and of the clustering effects likely in systems with protons. Third, the fraction of protons required to ensure stability is small and is typically less than 10%. Finally, since neutron matter generically has higher pressure than matter containing any fraction of protons or strangeness in the form of hyperons or kaons, our results provide stringent upper bounds on the neutron star maximum mass and radius.
To compute the EoS for neutron stars it is necessary to describe the nucleon-nucleon interactions at short distances, or large relative momenta up to p ≈ 2p_Fn ≈ 660 MeV (ρ/ρ0)^(1/3), where p_Fn is the neutron Fermi momentum, ρ is the typical density in the neutron star core, and ρ0 = 0.16 fm^-3 is the nuclear saturation density. Relative momenta up to p_Fn are required in even a mean-field (Fermi gas) description, and the nn interaction scatters nucleons to larger momenta, up to order (1.5-2)p_Fn at saturation density. Descriptions of higher-density neutron matter with softer interactions, if they are consistently evolved to lower scales, must include 3n (and potentially 4n) interactions.
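As a numerical check of the momentum scales quoted above, for pure neutron matter the Fermi momentum is p_Fn = ħ(3π²ρ)^(1/3), which gives 2p_Fn ≈ 660 MeV/c at saturation density:

```python
import numpy as np

hbar_c = 197.327   # MeV fm
rho0 = 0.16        # fm^-3, nuclear saturation density

for x in (1.0, 2.0, 3.0):                      # rho / rho0
    kF = (3.0 * np.pi**2 * x * rho0) ** (1/3)  # neutron Fermi momentum, fm^-1
    print(f"rho = {x:.0f} rho0: p_Fn = {hbar_c * kF:.0f} MeV/c, "
          f"2 p_Fn = {2 * hbar_c * kF:.0f} MeV/c")
# At rho0 this prints p_Fn ~ 331 MeV/c and 2 p_Fn ~ 663 MeV/c.
```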
Phenomenological two-nucleon potentials such as the Argonne potential have been constructed to describe scattering data up to relative momenta of about 600 MeV with high accuracy [13]. Despite the fact that the Argonne potential has been fit up to laboratory energies of 350 MeV, it reproduces scattering data very well up to much larger energies [14]. The AV8' interaction we employ in this study is identical to the full AV18 interaction in s and p waves, and includes the dominant one-pion interaction in higher partial waves. Chiral interactions also reproduce the scattering data very well below 350 MeV laboratory energy, but they fail rapidly above it because of the cutoff in presently available interactions. At larger momentum transfer, the potentials cannot describe inelasticities, but in scattering channels where inelasticities are known to be small they have been shown to provide a good description. They also provide good predictions [15] of high-momentum components of nuclear wave functions, as observed in nucleon [16,17] and electron scattering [18,19]. These high-momentum observables provide a test of the assumed short-distance features. In the low-energy, high-momentum region relevant to neutron stars, the inelasticities in 2n scattering must be absorbed into many-body forces (3n, 4n, ...) intimately connected to the short-distance behavior of the 2n interaction.
The nuclear Hamiltonians we consider contain the non-relativistic kinetic energy and the 2n and 3n interactions:
H = Σ_i p_i^2/(2m) + Σ_{i<j} v_ij + Σ_{i<j<k} V_ijk.   (1)
For the 2n potential, we use the Argonne AV8' model [20], and the form of the 3n interaction is inspired by both the Urbana IX and the Illinois models [5]. We consider a range of 3n interactions that contain long-distance s- and p-wave 2π exchange contributions, an intermediate-range (3π loops) contribution, and a spin-independent short-range repulsive term. Explicitly,
V_ijk = A_2π^PW O_ijk^(2π,PW) + A_2π^SW O_ijk^(2π,SW) + A_3π O_ijk^(3π) + A_R O_ijk^R.   (2)
This form of interaction includes all the terms present in the low-order chiral interaction, plus selected terms found to be important in studies of light nuclei and nuclear matter using the Argonne interactions.
The structure of the operators O appearing above is defined in Ref. [5]. The relative contributions of these four components of the 3n force depend on the 2n interaction. We find that for the Argonne potential, the 2n interactions suppress the long-distance (2π) contribution of the 3n force in the ground state. This suppression is a result of the pion-range correlations induced by the 2n force; we find it also occurs for the super-soft core NN interaction [21]. For the typical ranges of values of the strength parameters A_2π^PW and A_2π^SW considered in Ref. [5], we find the contribution of these operators to the ground-state energy is repulsive but very small at all densities studied. In contrast, this interaction is large and attractive in light nuclei, where both neutrons and protons contribute. The intermediate-range (3π) 3n interaction was introduced to fit the properties of weakly bound neutron-rich nuclei such as 8He [5]. Earlier calculations [10] have shown that this interaction is strong and attractive in neutron matter for the typical values of A_3π quoted in Ref. [5]. In this work, we explored a range of values for A_3π from zero to that in the Illinois-7 3n interaction [22], because the structure of this term is still not fully understood or constrained. We use a phenomenological short-range repulsive term as in the Urbana and Illinois three-body forces [5]. We have also considered a different form, V_R^μ = A_R Σ_cyc v(r_ij) v(r_jk) with v(r) = exp(−2μr); other forms of V_R have been explored, giving very similar results.
The 3n interaction we employ is not intended to be a microscopic treatment of the complete 3n interaction. It assumes that, for the neutron matter equation of state, the effects of more complicated spin-dependent short-distance 3n interactions, relativistic effects, and potential 4n interactions can be mimicked by simplified three-neutron interactions with a wide range of spatial dependence. This assumption has been tested in the case of relativistic corrections, where Ref. [23] found that the density dependence of the relativistic effects is similar to that of the 3n interaction. Further tests of the density dependence of specific higher-order terms in the chiral interaction would be valuable. The different forms of V_R we have explored span a wide range of density dependence for the 3n interaction, as shown below.
For the 3n interaction we vary both A_3π and μ to study the sensitivity to short-range physics. The strength of the short-range 3n interaction, A_R, is taken to be a free parameter adjusted to yield the experimentally accessible nuclear symmetry energy. Although not proven, we make the following reasonable assumptions: (1) relativistic effects in neutron matter show a density dependence similar to the short-range three-nucleon interaction, as carefully studied in Ref. [23]; (2) the density dependence of additional spin-dependent short-range 3n interactions (for example, higher-order terms in chiral expansions) in the equation of state of neutron matter can be described in a spin-independent model; and (3) four- and higher-body forces, whose contributions are expected to have a stronger density dependence, are suppressed relative to the 3n force for densities up to (2-3)ρ0. This last assumption can be justified at nuclear density by the high-precision fits to light nuclei obtained with only 3n forces [24]; at higher density this model assumption can be tested by its predicted correlation between the properties of neutron-rich nuclei and neutron stars.
We assume that E_sym = E_neutron(ρ0) − E_nuclear(ρ0); using the experimental value E_sym = 32 ± 2 MeV [25] and E_nuclear(ρ0) = −16.0 ± 0.1 MeV from nuclear mass models [26], we obtain an empirical constraint for the neutron matter energy, E_neutron(ρ0) = 16 ± 2 MeV. Potential higher-order corrections to the quadratic nuclear symmetry energy, for which there is some theoretical motivation but no clear experimental evidence, may affect the extraction of the neutron matter energy and increase the associated error. In this work we ignore these poorly known corrections and tune A_R to reproduce a neutron matter energy in the range 16 ± 2 MeV. Our results are shown in Fig. 1, where the green and blue points are QMC results for different choices of A_R corresponding to E_neutron(ρ0) = 16 MeV (E_sym = 32 MeV) and E_neutron(ρ0) = 17.7 MeV (E_sym = 33.7 MeV), respectively. The results are compared to those obtained using a 2n force without 3n (E_sym = 30.5 MeV), and 2n combined with the Urbana IX 3n (E_sym = 35.1 MeV). The bands depict the sensitivity to the short-distance spin and spatial structure of the 3n interaction and are obtained by varying the range of the 3n short-distance force and A_3π.
In the vicinity of nuclear density, E_neutron(ρ) ≈ E_neutron(ρ0) + (L/3)(ρ − ρ0)/ρ0, where L is related to the derivative of the nuclear symmetry energy. The inset in Fig. 1 shows the correlation between E_sym and L. This correlation is insensitive to the large variations in the range μ of the short-range 3n force and the strength A_3π of the 3π term. This is in sharp contrast to the predictions of mean-field theories, where the slope was found to be very sensitive to the choice of effective interactions [27]. Previous calculations of neutron matter up to ρ0 [28] used a chiral 2n interaction fit to laboratory energies of 350 MeV plus the two-pion-exchange three-nucleon interaction to calculate the neutron matter equation of state using perturbation theory. In contrast to our results, a significant repulsion from the 2π-exchange long-range 3n interaction was found. Since this force is better constrained by light nuclei, these earlier calculations can make a prediction for the neutron matter energy independent of the phenomenological short-range interaction, which plays an important role in our calculation. To understand this basic difference, further tests of the convergence of perturbation theory and of the chiral expansion in the diagrammatic calculations, a survey of other two-body interactions in the AFDMC, and the incorporation of chiral interactions in non-perturbative methods such as the lattice and suitable extensions of QMC would be necessary.
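For reference, L is the standard slope parameter of the symmetry energy; under the quadratic approximation used here, the density dependence of E_sym carries over to E_neutron, so the expansion above can be written as

$$E_{\mathrm{neutron}}(\rho) \approx E_{\mathrm{neutron}}(\rho_0) + \frac{L}{3}\,\frac{\rho-\rho_0}{\rho_0}, \qquad L = 3\rho_0\,\left.\frac{\partial E_{\mathrm{sym}}(\rho)}{\partial \rho}\right|_{\rho_0}.$$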
Current determinations of L have relied on analyses of neutron skins, surface contributions to the symmetry energy of neutron-rich nuclei, and isospin diffusion in heavy-ion reactions. These studies have been useful, but not very constraining, as acceptable values lie in the range L = 40-100 MeV [25]. However, a better determination of L, even with a modest reduction in the error, would test our model for the 2n and 3n interactions.
The predictions of QMC can be accurately fit using
E(ρ) = a (ρ/ρ0)^α + b (ρ/ρ0)^β,
where the coefficients a and α are sensitive to the low-density behavior of the EoS, while b and β are sensitive to the high-density physics [29]. We find that the 3n force plays a key role in determining the coefficient b, and the variation of the other EoS parameters is comparatively small. Numerical values for these parameters are reported in Table I. To calculate the mass and radius of neutron stars we solve the Tolman-Oppenheimer-Volkoff (TOV) equations for the hydrostatic structure of a spherical non-rotating star using the QMC equation of state for neutron matter [30,31]. The QMC EoS we use applies for ρ ≥ ρ_crust = 0.08 fm^-3. Below this density we use the EoS of the crust obtained in earlier works, Refs. [32] and [33].
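A minimal sketch of this parameterization and the corresponding pressure p = ρ² dE/dρ follows. The parameter values below are illustrative placeholders chosen only so that E(ρ0) ≈ 16 MeV; they are not the fitted values of Table I.

```python
import numpy as np

rho0 = 0.16                 # fm^-3, nuclear saturation density
a, alpha = 13.0, 0.50       # low-density term (illustrative values, MeV)
b, beta = 3.2, 2.5          # high-density term, set mainly by the 3n force

def E(rho):
    """Energy per neutron, E = a x^alpha + b x^beta with x = rho/rho0 (MeV)."""
    x = rho / rho0
    return a * x**alpha + b * x**beta

def p(rho):
    """Pressure p = rho^2 dE/drho (MeV fm^-3)."""
    x = rho / rho0
    dEdrho = (a * alpha * x**(alpha - 1) + b * beta * x**(beta - 1)) / rho0
    return rho**2 * dEdrho

for x in (1.0, 2.0, 3.0):
    rho = x * rho0
    print(f"rho = {x} rho0: E = {E(rho):5.1f} MeV, p = {p(rho):5.2f} MeV/fm^3")
```

The pressure, rather than the energy, enters the TOV equations directly, which is why b and β (and hence the 3n force) control the mass-radius curve.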
The neutron star mass-radius predictions are obtained by varying the 3n force and are shown in Fig. 2. The striking feature is the estimated error in the radius of a neutron star with a canonical mass of 1.4 M_solar. The uncertainty in the measured symmetry energy of ±2 MeV leads to an uncertainty of about 3 km in the radius, while the uncertainties in the short-distance structure of the 3n force predict a radius uncertainty of ≲ 1 km; the different bands of Fig. 2 reflect this sensitivity to the short-distance structure of the 3n interaction. The central densities of stars with M ≳ 1.5 M_solar are larger than 3ρ0. At these higher densities, effects such as relativistic corrections to the kinetic energy, retardation in the potential, and four- and higher-body forces become important. Consequently, non-relativistic models violate causality and predict a sound speed c_s = (∂p/∂ε)^(1/2) ≳ c for ρ ≳ (4-5)ρ0. To overcome this deficiency we adopt the strategy suggested in Ref. [34] and replace the EoS above a critical density ρ_c by the maximally stiff, or causal, EoS given by p(ε) = c²ε − ε_c, where p is the pressure, ε is the energy density, c is the speed of light, and ε_c is a constant. This EoS is maximally stiff and predicts the most rapid increase of pressure with energy density without violating causality. The constant ε_c is the parameter that determines the discontinuity in energy density between the low- and high-density equations of state. Our choice of ε_c ensures that the energy density is continuous and provides an upper bound on both the radius and the maximum mass of the neutron star. Figure 3 shows how the bounds on the maximum radius and mass of the neutron star vary with the choice of the critical density ρ_c. It also illustrates that the bounds provide useful constraints only when the EoS is known up to (2-3)ρ0. In Ref. [35] bounds on the radius were derived using an EoS of neutron matter calculated up to ρ0 with specific assumptions about polytropic equations of state at higher densities. Our upper bounds are model independent and show that the radius of a 1.4 M_solar neutron star can be as large as 16 km if ρ_c = ρ0. To obtain a tighter bound, the equation of state between 1ρ0 and 2ρ0 is important. The red, green, blue, and black curves are predictions corresponding to the 3n interaction strength fit to E_sym = 30.5, 32.0, 33.7, and 35.1 MeV, respectively. We also note that these bounds do not change much for ρ_c ≳ 4ρ0 because the QMC EoS is already close to being maximally stiff in this region. These upper bounds provide a direct relation between the experimentally measurable nuclear symmetry energy and the maximum possible mass and radius of neutron stars.
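Written out in units where c = 1 (a restatement of the construction above, with the matching fixed by continuity at ρ_c), the causal branch reads

$$p(\varepsilon) = \varepsilon - \varepsilon_c \;\; (\rho \ge \rho_c), \qquad \frac{dp}{d\varepsilon} = 1 \;\Rightarrow\; c_s = c, \qquad \varepsilon_c = \varepsilon(\rho_c) - p_{\mathrm{QMC}}(\rho_c),$$

so the sound speed equals c above ρ_c and the pressure joins continuously onto the QMC EoS at the matching density.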
To summarize, we predict that the correlation between the symmetry energy and its derivative at nuclear density is nearly independent of the detailed short-range 3n force once its strength is tuned to give a particular value of E_sym. Consequently, in our model one short-distance parameter, A_R, completely determines the behavior of the EoS. At higher density, the sensitivity to the short-distance behavior of the 3n interaction translates to an uncertainty of about 1 km in the radius of a neutron star with mass M = 1.4 M_solar. The uncertainty at high density due to the poorly constrained symmetry energy is larger, about 3 km. Within our model we predict that neutron star radii are in the 10-13 km range for a nuclear symmetry energy in the range 32-34 MeV. If nuclear experiments can determine that E_sym ≤ 32 MeV, QMC predicts that L ≲ 45 MeV at nuclear density, and for neutron stars it predicts M_max < 2.2 M_solar and R < 12 km for a neutron star with M = 1.4 M_solar. The relationship between the symmetry energy and its density dependence is experimentally relevant, and its implications for the neutron star mass-radius relationship are subject to clear observational tests. | 2012-03-14T16:07:46.000Z | 2011-01-10T00:00:00.000 | {
"year": 2011,
"sha1": "924c584240c825cb4289dbead251055bb8144d58",
"oa_license": "publisher-specific, author manuscript",
"oa_url": "https://link.aps.org/accepted/10.1103/PhysRevC.85.032801",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "3208ef5e6f974017be31a15d97b980bd4f47189d",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
250582971 | pes2o/s2orc | v3-fos-license | Providing Low-barrier Addiction Treatment Via a Telemedicine Consultation Service During the COVID-19 Pandemic in Los Angeles County: An Assessment 1 Year Later
Background Los Angeles County Department of Health Services provides medical care to a diverse group of patients residing in underresourced communities. To improve patients' access to addiction medications during the COVID-19 pandemic, Los Angeles County Department of Health Services established a low-barrier telephone service for DHS providers in March 2020, staffed by DATA-2000–waivered providers experienced with prescribing addiction medications. This study describes the patient population and medications prescribed through this service during its initial 12 months. Methods We performed a retrospective evaluation of a provider-entered call registry for the telephone consult line. Information was collected between March 31, 2020, and March 30, 2021. The registry includes information related to patient demographics, the reason for visit, and which addiction medications were prescribed. We conducted descriptive statistics in each of these domains. Results During the study period, 11 providers on the MAT telephone service logged 713 calls. These calls represented a total of 557 unique patients (mean age of 40 years, 75% male, 41% Latino, 49% experiencing homelessness). Most patients either had Medicaid insurance (77%) or were uninsured (20%). The most prescribed addiction medication was buprenorphine-naloxone (90%), followed by nicotine replacement therapy (5.3%), naltrexone (4.2%), and buprenorphine monotherapy (1.8%). Conclusion A telephone addiction medication service is feasible to deliver low-barrier medications to treat addiction in underresourced communities, especially to individuals experiencing homelessness. This can mitigate but does not eliminate disparities in access to addiction medications for communities of color.
Substance use increased, and access to substance use disorder (SUD) treatment was disrupted, during the COVID-19 pandemic, particularly for people experiencing homelessness and individuals in underresourced communities. 1,2 National agencies established emergency policies to facilitate access to telemedicine services for patients with SUDs, and preliminary studies support that the pivot from in-person to virtual services has been successful. [3][4][5][6] Los Angeles County is home to 10 million residents, 66,436 of whom experienced homelessness in 2020. 7 The Los Angeles County Department of Health Services (LAC DHS) is the United States' second largest municipal health system; it operates an extensive network of public hospitals and clinics in LAC and serves approximately 500,000 patients annually. During the COVID-19 pandemic, LAC DHS established a telephone consultation service for DHS providers in March 2020 to facilitate low-barrier access to addiction medications for DHS patients. The line was staffed by waivered providers experienced in prescribing addiction medications who, seeing the gap in treatment access, agreed to participate. The service was available to all staff within DHS-affiliated hospitals, clinics, correctional services, and contracted programs, and to outreach workers, who could call when they encountered a patient appropriate for addiction medication. Health care providers working in any DHS-affiliated care setting were encouraged to call the telephone line to connect their patients (whether housed or unhoused) with a telephonic evaluation for addiction medication. Services were advertised to DHS directly operated and contracted programs via screensavers, e-mailed fliers, and presentations at virtual staff meetings.
The consultation service was not made available to current or prospective patients to call directly. A field engagement specialist and team delivered a brief training program for 150 community health workers about addiction medications and how to access the line. Local pharmacies near high-density areas of opioid overdose were identified and given education on dispensing addiction medications to our target population.
The aim of this study was to describe the first 12 months of the addiction medication telephone consultation service, including characterizing the calls received, the patients treated, and the medications prescribed.
Data Collection/Analysis
Registry information was collected from March 31, 2020, to March 30, 2021. After each telephone visit, providers were directed to log patient information, including name, date of birth, housing status, and reason for call, into a secure registry. These data were fact-checked by a coinvestigator (JSG), who cross-referenced the inputted information with demographic data previously entered into the electronic medical record (EMR). Additional information, including age, race/ethnicity, and insurance status, was also extracted from the EMR.
Information on which medications were prescribed was collected from the EMR during the study period. All Food and Drug Administration-approved addiction medications deliverable from a physician's office for opioid use disorder (OUD) (buprenorphine, buprenorphine-naloxone, naltrexone), alcohol use disorder (naltrexone, acamprosate, disulfiram, and off-label topiramate and gabapentin), and tobacco use disorder (nicotine patches, gum, and lozenges; varenicline; and bupropion) were included. We excluded patients who received only nicotine replacement therapy and no other addiction medication. We conducted an analysis using descriptive statistics to characterize the patient population.
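A minimal sketch of how such descriptive statistics might be computed from the merged registry/EMR extract is shown below; the column names and the toy rows are hypothetical, not the actual registry schema.

```python
import pandas as pd

# Hypothetical extract after cross-referencing the call registry with the EMR.
registry = pd.DataFrame({
    "mrn": ["A1", "A1", "B2", "C3"],   # medical record number; patient A1 called twice
    "age": [34, 34, 52, 41],
    "sex": ["M", "M", "M", "F"],
    "ethnicity": ["Latino", "Latino", "White", "Latino"],
    "homeless": [True, True, False, True],
    "medication": ["buprenorphine-naloxone", "buprenorphine-naloxone",
                   "naltrexone", "buprenorphine-naloxone"],
})

patients = registry.drop_duplicates(subset="mrn")        # unique patients
print(len(registry), "calls for", len(patients), "patients")
print(patients["age"].mean())                             # mean age
print(patients["homeless"].mean())                        # share experiencing homelessness
print(patients["medication"].value_counts(normalize=True))
```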
Institutional Review Board
This study was deemed exempt by the LAC DHS institutional review board.
RESULTS
There were 713 calls logged in the addiction medication telephone consultation registry by 11 providers during the study period. These calls represented a total of 557 patients (mean age of 40 years, 75% male, 41% Latino, and 49% experiencing homelessness).
DISCUSSION
In this retrospective study of a telephone consultation service for low-barrier addiction medications, we found that the intervention served a high volume of patients, approximately half of whom were experiencing homelessness. Patients who accessed care were more often younger, male, and White compared with the general DHS population.
Low-barrier addiction treatment interventions and telemedicine initiatives have become catalysts for improving access to addiction treatment during the COVID-19 pandemic. 8,9 More than one-third of clinicians have reported starting buprenorphine for patients with OUD without an in-person examination. 8 Relaxed guidelines on prescribing have allowed for innovation in telemedicine addiction services, which have demonstrated feasibility and favorable treatment outcomes, [3][4][5]9 although previous interventions most often describe single-site pilot programs with significantly fewer patients served than via our service.
We were initially surprised by the number of patients experiencing homelessness who accessed our consultation service. Although most prior addiction medication interventions for patients experiencing homelessness have relied either on mobile van services or on providing their patients with cell phones, 7,8 this population has less access to stable phone numbers with operational data plans, especially with unlimited minutes. 10 To our knowledge, this is among the first interventions utilizing community health workers to enter encampments and interface directly with patients, linking them with providers telephonically. In addition, this is one of the first broadly implemented interventions to serve every DHS site and contracted program, spanning the full continuum of care from acute care hospitals and ambulatory clinics to street-based outreach teams. Interestingly, we noted very low rates of prescription for medications other than buprenorphine-naloxone and naltrexone, in large part because most patients served had OUD.

Like prior studies, we noted disparities in access to care among racial/ethnic minorities. [11][12][13] Even with a low-threshold addiction treatment approach within a safety net institution, we had a largely White population compared with Los Angeles County as a whole. This highlights that telephone delivery of addiction medication does not erase the disparities in addiction treatment among Black and Latino populations. 11,12 There are many structural factors shaping the availability of addiction medication in communities in the United States, and the demand for and acceptability of addiction medication are shaped by factors that may include a lack of Black and Latino providers offering these services as well as a lack of culturally specific and responsive outreach, education, and treatment initiatives. 12

Our study has limitations. Not every provider consistently utilized the call-line registry, so we may have missed encounters that were not captured using standard processes. Similarly, not all prescriptions were documented through the standard process. Certain medications, such as varenicline and naltrexone, have multiple use profiles, and we were not able to differentiate the reason for use. Our comparison group included all DHS patients, not only those with SUD, and thus we were not able to draw firm demographic conclusions. Finally, we did not have a mechanism to assess data on individual providers, medication receipt, or continuity of ongoing care.
In summary, an addiction medication telephone consultation service can be a feasible approach to deliver low-barrier treatment to high-risk patients, including those experiencing homelessness, during the COVID-19 pandemic. More research is needed to improve access to addiction treatment for Black and Latino communities. | 2022-07-17T06:21:37.718Z | 2022-07-15T00:00:00.000 | {
"year": 2022,
"sha1": "9ca607009e70b69415718f1db1f866c408313a5b",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "WoltersKluwer",
"pdf_hash": "d7d624fe05f9c402f5d9cc364181ce8940170435",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
263701299 | pes2o/s2orc | v3-fos-license | Recent advances in the chemistry and applications of fluorinated metal–organic frameworks (F-MOFs)
Metal–organic frameworks are a class of porous crystalline materials based on the ordered connection of metal centers or metal clusters by organic linkers with comprehensive functionalities. The interest in these materials is rapidly moving towards their application in industry and real life. In this context, cheap and sustainable synthetic strategies for MOFs with tailored structures and functions are nowadays a topic widely studied from different points of view. In this review, fluorinated MOFs (F-MOFs) and their applications are investigated. The principal aim is to provide an overview of the structural features and the main applications of MOFs containing fluorine atoms, either as anionic units or as coordinating elements of more complex inorganic units (and therefore directly linked to the structural metals), or as part of fluorinated linkers used in the synthesis of MOFs. Herein we present a review of F-MOFs reported in the recent literature, compared to benchmark compounds published over the last 10 years. The compounds are discussed in terms of their structure and properties according to the aforementioned classification, with an insight into the different chemical nature of the bonds. The application fields of F-MOFs, especially in sustainability-related issues such as harmful gas sorption and separation, are also discussed. F-MOFs are compounds containing fluorine atoms in their framework and they can be based on: (a) fluorinated metallic or semi-metallic anionic clusters; (b) fluorinated organic linkers; or (c) possibly both building blocks. The nature of a covalent C–F bond in terms of length, charge separation and dipole moment differs sensibly from that of a partly ionic M–F (M = metal) one, so that the two classes of materials (points a and b) have different properties and find various application fields. The study shows how the insertion of polar M–F and C–F bonds in the MOF structure may confer several advantages in terms of interaction with gaseous molecules, and the compounds can find application in gas sorption and separation. In addition, hydrophobicity tends to increase compared to non-fluorinated analogues, resulting in an overall improvement in moisture stability.
Introduction
The UN's 2030 Agenda for Sustainable Development, together with the Paris Agreement on Climate Change, proposed several urgent goals for the member states to achieve over the next decade, including a dedicated goal on energy, SDG 7, which calls to "ensure access to affordable, reliable, sustainable and modern energy for all". Providing all people with access to affordable and sustainable energy will open up a new world of opportunity. This can lead to increased economic opportunities and jobs, empowerment of the most fragile parts of the population, better education and health care for more sustainable, equitable communities, and better and more resilient protection against climate change. 1 In this context, the green transition from fossil fuels to renewable sources and the reduction of greenhouse gas emissions are two of the main issues that require meaningful technological development in order to fulfil the sustainability criteria. The chemistry of metal–organic frameworks (MOFs), a class of porous crystalline materials constituted by the ordered connection of metal clusters and organic linkers, has seen a tremendous development in the last few years, as their versatility allows them to be rationally designed for specific purposes targeting many of the goals of the sustainable development agenda. 2 The search for environmentally friendly synthesis processes, industrial scalability and high recyclability are the main issues driving scientific interest in these materials. The most important applications of MOFs which have already reached high technological readiness levels (TRLs) 3 concern carbon dioxide capture 4,5 and storage (CCS) from the atmosphere (direct air capture, DAC) [6][7][8] and from industrial flue gas emissions, 9 biogas purification 10,11 and separation, 12,13 water harvesting, 14,15 heterogeneous (photo)catalysis, 16,17 batteries 18,19 and electrochemistry. 20,21 In particular, the highest TRL levels (up to 6) have been achieved for water harvesting applications 22 and CCS, 23 while research for other applications is still at a lower technological level. Tailored porosity and high stability are the two key features required for such applications. [25][26] In addition, fluorocarbon chains are often highly hydrophobic and not subject to oxidation. 27 These characteristics render the (per)fluorinated materials exceptionally stable and useful for targeted applications in harsh conditions.
28 Fluorination of MOFs can be based on (a) using fluorinated inorganic units based on metallic or semi-metallic clusters or (b) using fluorinated linkers. Ultimately, F-MOFs could contain both the inorganic and the organic fluorinated building blocks but, to the best of our knowledge, no such examples have been reported to date. [34][35][36] Considering the aforementioned approaches, F atoms are expected to decorate the pores of the MOFs, resulting in peculiar properties towards the absorption of carbon dioxide, water and other molecules such as light hydrocarbons. The improved polarization of the fluorine-decorated cavities may increase the isosteric heat of adsorption (Qst) of partially or locally polarized molecules. As an example, CO2 interacts preferentially with fluorine due to the electrophilic C atom bonded to the oxygen atoms, resulting in an increased absorption selectivity. This effect could lead to a purely physisorbed mechanism at relatively high temperatures (273 to 323 K range), rendering these materials very useful for outstanding applications such as DAC (absorption at 400 ppm) or CO2 capture from industrial flue gases with moderate CO2 content (7 to 15% wt). This review aims to report and discuss the recent developments of the chemistry of F-MOFs in terms of synthetic strategies and their main structural features. The key points of the present work can be summarized as follows: (i) fluorinated inorganic anions can be effectively used as building blocks for the synthesis of supermicroporous MOFs; (ii) fluorine can also partially or fully substitute OH groups in hydroxylated polynuclear clusters acting as secondary building blocks of known MOFs; (iii) fluorination of MOFs can be accomplished through the use of perfluorinated linkers, providing access to novel structures with unusual properties; (iv) the presence of fluorine strongly increases the affinity towards molecules of high concern, such as CO2, resulting in more efficient separation features compared to non-fluorinated analogues. According to the last point, the recent applications in the field of capture and separation of CO2 (ref. 37) and of other gases (alkanes/alkenes, SO2, H2S, CH4, R22 etc.), 28,31 as well as hydrophobicity 38 and dehydration of natural gas, 39 will also be discussed.
F-MOFs based on metallic or semi-metallic fluorinated anions
MOFs based on fluorinated anions are compounds in which the F atoms are directly bonded to an inorganic structural unit of the framework different from the organic linker. For instance, F can be present as fluoride and replace the terminal hydroxyl group in hydrated polynuclear clusters, or it can coordinate single metallic or semi-metallic elements, generally forming anionic moieties such as SiF6 2−. The MOFs of the M′FSIX-Ni-pyrazine series were built by changing the hexafluorinated metal ions having the same octahedral coordination, M′F6 (M′ = Si, Ga, Zr, Ge, Sn, Ti, V, and Nb), while keeping nickel and pyrazine as divalent cation (M = Ni) and ligand, respectively.
The authors then modelled other structures by changing the organic linker to 4,4′-bipyridine (bpy), 4,4′-azopyridine (apy), 4,4′-bipyridylacetylene (bpa), and 3,3′-di(4-pyridyl)-1,2,4,5-tetrazine (dpt). CO2 capacity and selectivity were evaluated by theoretical gas adsorption isotherms using CO2/N2 15/85 gas streams. Notably, the working capacity of the MOFs increased with increasing ligand size, but the selectivity obviously decreased. In order to keep the same supermicropore range observed in the parent MOFs, the authors also constructed interpenetrated MOFs with the longer ligands, and they found that interpenetration had a beneficial effect in maintaining the overall absorption capacity, with high selectivity (over 2400) for the dpt-containing MOF. 40 In 2016 the Eddaoudi group reported a new family of F-MOFs inspired by the same design as the SIFSIX compounds, but using a new fluorinated anion based on Nb, namely NbOF5 2−, which displayed narrower pores with respect to the SIFSIX family. 38 The reason lies in the longer Nb-O and Nb-F distances compared to Si-F (1.899 Å for Nb-F vs. 1.681 Å for Si-F). This resulted in larger anionic octahedra pillaring the square grid, thus reducing the pore size. The authors reported the Ni-pyrazine derivative containing the NbOF5 2− anion with the acronym NbOFFIVE-1-Ni and they also solved the X-ray structure containing CO2. NbOFFIVE-1-Ni at 296 K crystallised in the tetragonal space group I4/mcm with unit cell parameters a = b = 9.942(4) Å and c = 15.764(6) Å; its structure is represented in Fig. 1b. The fine tuning of the MOF pore size was found to be crucial for increasing the CO2 interaction in terms of enthalpy of absorption. This part will be discussed in the section dedicated to the applications of F-MOFs. The work was further expanded in the last two years, where Al and Fe dianionic clusters, namely AlF5(H2O) 2− and FeF5(H2O) 2−, were also employed (see below). A related family based on MF6 2− anions, with M = Si, Ti, Zr, Ge and Sn as the fluorinated building unit, was reported by Zaworotko and co-authors in 2016. 41 Here a series of MOFs with the general formula [Cu6(Tripp)8](MF6)3(MF6)3·g (Tripp = 2,4,6-tris(4-pyridyl)pyridine; g = disordered guest molecules; M = Si, Ti, Ge, Zr, or Sn) were reported. The Tripp linker designs cubic-octahedral cages similar to those observed in the HKUST-1 MOF but with an unusual 3,5-c topology, with the anionic MF6 moieties connecting the copper centres. The result is the formation of a truncated cubic octahedron connected by the fluorinated moieties (Fig. 2).
In 2023, Zhang and co-authors reported on the templating effect of GeF6 2− embedded in the framework of a Cu tri(pyridin-4-yl)amine (TPA) MOF, namely ZNU-6. The formula of ZNU-6 is [Cu6(GeF6)6(TPA)8]. The assembly of Cu2+ ions and TPA produced a cationic framework counterbalanced by equimolar amounts of GeF6 2− anions with respect to copper ions, resulting in a complex ith-d topological network in which icosahedral cages and 1D channels are present. Twelve 1D channels surround each cage and each interlaced channel connects four cages. The fluorinated anions are placed at each cage edge, forming a region densely decorated with fluorine atoms. The MOF is permanently porous, displaying a BET surface area of over 1330 m2 g−1 and a micropore volume of 0.55 cm3 g−1. The schematic structure of ZNU-6 illustrates the templating effect of the GeF6 2− anions. In 2021, Balkus and co-workers found an unusual mechanism in RE-MOFs where the use of 2-fluorobenzoic acid (2-FBA) as modulator induced the substitution of fluoro-bridging groups in place of μ3-OH groups. The authors reported the 2-FBA modulated syntheses of two Ho-MOFs by using benzenedicarboxylic acid (BDC) and 2,2′-bipyridine-4,4′-dicarboxylate (4,4′-BPDC). 44 The MOF containing BDC has the UiO-66 structure and is based on hexanuclear Ho6 clusters bridged by the carboxylate groups in an fcu porous framework. 44 The formula of these clusters is Ho6(OH)4F4(L)12 and the fluorine atoms occupy the same μ3-bridged positions generally hosted by OH groups. The BPDC derivative is based on trinuclear clusters forming 1D zigzag chain "ladders". The crystal structure of the Ho-UiO-66 type MOF is reported in Fig. 4.
The rungs of the ladder consist of the Ho-F bonds aligned along the c-axis. These chains are connected through the carboxylate groups of BPDC along both the a and c directions. The structures of the two MOFs and the details of the inorganic building units are shown in Fig. 3 and 4. The presence and the amount of the F atoms on the inorganic cluster were studied in detail by the authors with X-ray photoelectron spectroscopy and solid-state 19F MAS NMR.
The proposed fluorination mechanisms involved C-F activation and fluorine extraction mediated by Ho through two different pathways: single electron transfer (SET) or fluorine transfer with benzyne formation.
Interestingly, the authors claimed that a similar mechanism could be involved in the fluorination of other MOFs when the same fluorinated modulators are used, although verifying this is not straightforward. A theoretical DFT study was recently reported by Prasetyo and Pambudi on the effect of fluorination on the hexanuclear cluster of a Zr-based MOF with MOF-801 structure (fumaric acid as linker). 45 The authors found that the substitution of 1 to 4 F atoms in place of OH groups on the cluster noticeably affects the cell dimensions, and the Zr-F distances were found to be longer than the Zr-O ones (2.25 Å vs. 2.19 Å). The authors also calculated the binding energies for H2 adsorption as a function of the number of substituted F atoms and found an average increase of −5 kcal mol−1, resulting in a better affinity of the F-MOF compared to the pristine one. Similarly, the role of F atoms placed on the SBU of Mg-MOF-74 was investigated in a recent theoretical paper by Nguyen and co-authors. The authors employed ab initio molecular dynamics simulations to check the role of capping F atoms on the unsaturated metal centres of the MOF and found that the polarization of F centres on the metal sites increased the H2 affinity in terms of heat of adsorption by up to 3.9 kJ mol−1. 46

F-MOFs based on fluorinated linkers

MOFs based on fluorinated linkers have been more widely developed than those based on inorganic fluorinated units, thanks to the commercial availability of many fluorinated linkers and to the possibility of predicting the desired structural type by comparison with the non-fluorinated analogues. Scheme 1 reports the molecular structures of many of the linkers used in recent years, mainly based on aromatic groups containing fluorine or on perfluorinated aliphatic chains. 4,4′-(Hexafluoroisopropylidene)bis(benzoic acid) (H2FBBA) is among the most used fluorinated linkers reported to date.
The use of H2FBBA was first reported in 2011 by Banerjee and co-workers, using Cu(II) and several nitrogenated chelating co-ligands as reported in Scheme 2. 47 The syntheses afforded five F-MOFs with various structures and different dimensionalities. DMF or water was used as solvent and the reaction temperature ranged from 358 to 393 K. 38 The structures are composed of paddlewheel Zn2(COO)4 units connected by the monoprotonated HFBBA ligand to form square grids connected in the third dimension by the pillaring 4-bpdh or 4-bpdb linkers. Topological analysis revealed a rare point symbol (4^4·6^10·8) observed only in a limited number of compounds, all based on the coexistence of bent and linear linkers. The perfluorinated moieties of the H2FBBA linker faced each other along the square channels, designing a region rich in fluorine atoms which rendered these compounds highly hydrophobic and useful for heterogeneous catalysis in aqueous environments, namely for the Knoevenagel condensation reaction.
Very recently, rational design with H2FBBA and Zn allowed the obtainment of an ultramicroporous F-MOF with perfluorinated 1D channels, with the formula ZnFBBA. The structure was already reported by Monge et al. in 2005. It is composed of infinite chains of corner-sharing ZnO4 tetrahedra which form typical paddle-wheel secondary building units. These SBUs are connected through FBBA linkers, designing two parallel one-dimensional (1D) channels in which the perfluorinated methyl groups are placed in a helical conformation along the c-axis. These channels are therefore strongly decorated by fluorine atoms, resulting in a small average size of 5.2 Å. The structure of ZnFBBA and the details of the 1D ultramicroporous channels are shown in Fig. 6. 48 This compound displays an unusual sorption behavior towards C2 hydrocarbons, exhibiting a reverse order of selectivity among acetylene, ethylene and ethane, as further discussed in the paragraph dedicated to applications.
The MOF featured ordered 1D channels designed by Cu2 paddlewheels linked to each other by one deprotonated (FBBA) and one protonated (H2FBBA) linker.
The bent carboxylic linkers expanded the framework in the third dimension, thus designing a pillared 3D network based on the connection of a 2D square grid through the H2FBBA linker. In this MOF the authors evidenced a bimodal distribution of small cavities of size 9.4 × 9.2 Å2 and 5.6 × 4.2 Å2, respectively, and a third hidden one of 8.8 × 4.7 Å2 connected to the two tubular cavities by a 1D narrow channel consisting of fluorinated windows with a 2.5 Å wide opening.

Scheme 1 Molecular structures of the fluorinated linkers discussed in this review.

Scheme 2 Reaction conditions for the syntheses of F-MOF-x from the work of Banerjee and co-workers. 47

Fig. 5 Structure of [Zn2(HFBBA)2(4-bpdh)]·0.5DMF, extended view along the 120 hkl direction. Color code: zinc light blue, carbon grey, nitrogen blue, fluorine green, oxygen red. 48
Depending on the temperature, these two apertures show a gate-opening effect and the cavities become successively accessible for hydrogen with increasing temperature. The MOF was also employed for the separation of deuterium from H2/D2 mixtures. 49 A linker similar to H2FBBA, namely 2,2′-bis(trifluoromethyl)[1,1′-biphenyl]-4,4′-dicarboxylate, hereafter H2BPTFM, was recently employed by Chen et al. for the construction of a copper(II) based MOF with the formula [Cu3(BPTFM)2·guest]. 50 The compound, named LIFM-100 by the authors in the original paper, is a 3D coordination polymer which displays a narrow-pore to large-pore phase transition when evacuated of the solvent guest molecules. The structure of the LIFM-100 np and lp phases is shown in Fig. 7. 51 The structure is composed of 1D chains constituted of Cu-O SBUs with three different coordination environments and two types of carboxylate groups. All the ligands and Cu-O chains form one type of double-walled tetragonal channel with 9 Å diameter and a surface decorated with F atoms, similarly to that observed with the trifluoromethyl groups of the H2FBBA linker. This generates a hydrophobic region within the channel. The lp and np phases can be interchanged by crystal-to-crystal phase transformation if the lp phase is previously evacuated and then soaked in different organic solvents. The flipping of one of the free oxygen atoms of a non-coordinated carboxylic group determines the transformation, driven by a change in the coordination environment of one copper atom, which becomes five-coordinated in a square pyramidal mode.
The presence of small channels decorated by -CF3 groups is analogous to that observed for F-MOFs based on H2FBBA, as described above, and the two MOFs display interesting properties in the separation of R22 gas from gas mixtures.
In a 2017 paper the same group prepared a family of multivariate MOFs starting from the LIFM-28 precursor. One of the reported MOFs, LIFM-86, was functionalized in both pockets with two fluorinated linkers, 2,2′-bis(trifluoromethyl)-4,4′-biphenyldicarboxylate, namely H2BPTFMA, and 2²,2⁵-difluoro[1¹,2¹:2⁴,3¹-terphenyl]dicarboxylic acid, as shown in Fig. 8. H2BPTFMA was also employed by Su and co-workers in 2017 for achieving fluorinated functionality on a preformed Zr-based MOF, namely LIFM-28, via a post-synthetic variable spacer installation (PVSI) strategy. LIFM-28 contains 8-connected Zr6 clusters with four pairs of terminal water molecules, thus designing two different insertion sites (A and B). Site A is suitable for the insertion of short linkers, based on biphenyldicarboxylate and its analogues, whereas site B is suitable for the insertion of longer spacers based on terphenyldicarboxylate and its analogues, as depicted in Fig. 8. 52 F-MOFs based on aromatic fluorinated linkers have been widely developed in the last few years by using fluorinated analogues of simple linkers normally used in MOF synthesis, such as terephthalic acid (H2BDC) or trimesic acid (H3BTC). 49 One of the first attempts at fluorination of a known MOF was proposed in 2013 by Van and co-workers. 53 The MOFs were based on [M(III)O4(OH)2] octahedra interconnected by fluoro-terephthalate linkers to form one-dimensional rhombic-shaped channels able to change their size when subjected to solvent removal or thermal stimuli, thus switching from narrow-pore to large-pore phases. Compared to the non-fluorinated analogues, the compounds showed an increased thermal stability and a better affinity towards CO2, lowering the pressure of the np to lp phase transition. H2O sorption experiments were also carried out, showing that fluorination imparted noticeable hydrophobicity to both of the partially fluorinated compounds. F-H2BDC and 2,5-(2F)-H2BDC have also been employed in a very recent paper by Zhang and co-authors for the preparation of fluorinated UiO-66 used for iodine absorption from wet iodine vapours. 55 A series of fluorine-substituted ortho-phthalic acids (H2OPA), namely 3-F-OPA and related analogues, was used for the construction of a family of MOFs (TKL-104 to TKL-107). The channels have an opening size of up to 6 Å and are decorated by F atoms belonging to the linkers. The compounds exhibited different stability depending on the position of the F atoms. In particular, TKL-104, the MOF with fluorine in meta position, was less stable upon activation than the other MOFs with fluorine in ortho position and the one containing the fully fluorinated linker. The four MOFs were employed for the separation of ethane/ethylene, as discussed in the section devoted to applications. 56 Tetrafluoroterephthalic acid (H2-4FBDC) was employed in 2017 by our group for the synthesis of two Ce-MOFs with MIL-140A and UiO-66 structures, respectively. 29 The two compounds could be obtained in water by slightly changing the synthetic conditions. Ce-F4MIL140A is the most interesting compound for its peculiar CO2 absorption properties. Its polyhedral representation is depicted in Fig. 9.
The structure of Ce-F4MIL140A is constituted by the connection of one-dimensional inorganic chains, composed of eight-coordinated Ce(IV) ions, carboxylate groups of the linker bridging two different Ce atoms, and μ3-O species, via the perfluorinated aromatic rings of the linker. This structure possesses narrow triangular channel-like pores lined with the fluorine atoms belonging to the linker. A water molecule coordinated to the Ce(IV) ions makes H-bonding interactions with neighbouring oxygen atoms. When it is removed by thermal treatment, an unsaturated coordination site is created that is crucial for the coordination of the adsorbed carbon dioxide molecules, inducing a strong affinity for this molecule. Notably, if the synthetic conditions are changed, the fluorinated F4UiO-66 phase is formed. The kinetics of crystallization of the two phases and the best conditions for obtaining pure and well crystallized compounds were studied using in situ synchrotron radiation in 2021. 57 A recent study made use of several coupled techniques to fully understand the dynamics of CO2 adsorption in terms of structural changes of the MOF and of heat of adsorption. The position of the adsorbed CO2 molecule was determined at the atomic scale by using synchrotron radiation high resolution powder diffraction and EXAFS. The proposed mechanism (Fig. 10) involved a concerted ring rotation and the presence of an unsaturated metal site interacting with the CO2 oxygen atoms. 58 In 2022, Wang and co-authors reported the synthesis of a hierarchically porous Al-MIL-53 containing H2-F4BDC using a monocarboxylic acid as modulator. The use of the modulator led to the formation of unstable Al complexes replaced by the F4BDC groups, thus forming the fluorinated MOF. 59 The direct synthesis of the perfluorinated MIL-53 MOF based on H2-F4BDC was carried out by our group via a solvent-free synthetic route.
The solvent-free route also made use of tetrafluorosuccinate (TFS) as linker, which afforded another MOF with MOF-801 structure type (PF-MOF-2) in which both fumarate and TFS are included in the framework thanks to post-synthetic modification (PSM). NMR analysis on the digested samples, coupled with TGA and gas sorption, indicated the amount of fluorinated linkers incorporated in the framework. Very likely, the fluorinated moieties were placed on the defective sites (missing clusters) of the MOFs. 30 Very recently the molecular rotor H2-BCP-F2 and its non-fluorinated analogue were employed by Comotti and co-workers to build two Al-MOFs (Al-FTR and Al-FTR-F2) with MIL-53-like structure. 61 The structure is constituted of infinite 1D corner-sharing AlO4(OH)2 octahedra linked to each other by the carboxylate groups of the BCP-based ligands, which are arranged perpendicular to the propagation direction of the columns, thus designing regular rhombic-shaped channels. Variable temperature synchrotron diffraction down to 4 K was used in order to model the rotor disorder along the rotation axis and the distance shift between the rotors in both the fluorinated and non-fluorinated cases. Laser-assisted hyperpolarized 129Xe NMR coupled with PW-DFT calculations was also employed in order to probe the free space and the molecular interactions with the linker as a function of temperature. Calorimetric measurements evidenced a high Qst of interaction with CO2 for the F-MOF (30 kJ mol−1) at 195 K, confirming the beneficial effect of F towards CO2. The structures and the rotor disorder of the two MOFs are shown in Fig. 11. 61 Concerning the use of longer linkers, it is worth mentioning the 2013 paper by Popov and co-authors reporting the Cu-catalyzed cross-coupling reaction between 2,3,5,6-tetrafluorobenzonitrile and 4-iodo-2,3,5,6-tetrafluorobenzonitrile to afford an octafluoro-biphenyl cyano precursor.
The precursor was then converted to 8F-BDCA (see Scheme 1) or to the bis-tetrazole analogue (8F-BTAZ), respectively, via acid hydrolysis or through an azide reaction catalyzed by ZnCl2. 50 The two linkers were then employed for the synthesis of three Cu-based MOFs. MOFF-1, as named by the authors, with the formula Cu(8F-BDCA)(MeOH), is built from paddle-wheel secondary building units linked to each other by the perfluorinated linker, designing a square grid lattice.
Adding another co-linker, namely 1,4-diazabicyclo[2.2.2]octane (DABCO), a new MOF (MOFF-2) with the formula Cu2(8F-BDCA)2(DABCO) is obtained. In this case the nitrogen is coordinated by the Cu atoms of the SBU, thus resulting in a 3D pillared network. The use of the latter 8F-BTAZ linker afforded another MOF with the formula Cu(8F-BTAZ)(H2O), namely MOFF-3. The network is constituted of infinite CuO2N4 units in which the copper atoms are octahedrally coordinated by a bridging water molecule and the tetrazole linkers, resulting in a 3D MOF with rhombic 1D infinite channels running along the c-axis. The presence of highly stacked perfluorinated linkers renders these MOFs extremely hydrophobic, as verified by contact angle measurements. The same group expanded this approach in 2015 by using 1,3,5-tris(2′,3′,5′,6′-tetrafluoro-4′-cyanophenyl)benzene (12F-BTCN) as building block to afford the tris-carboxylic (12F-BTCOOH) and the tris-tetrazolate (12F-BTCTZA) linkers, used with Cu(II) to form two highly porous zeotype MOFs with the same structure. 60 The two MOFs, with formulas based on the Cu2(12F-BTCOO) unit and its tetrazolate analogue, have specific surface areas as high as 2500 m2 g−1 and they were used for the separation of light fluorocarbon gases. 62
In 2022 Chen and co-authors reported on the structure and gas separation properties of a mixed-linker F-MOF based on Ni, 3,3′,5,5′-tetrakis(fluoro)biphenyl-4,4′-dicarboxylic acid (H2TFBPDC) and 2,4,6-tri(4-pyridinyl)-1,3,5-triazine (tpt), namely JXNU-12(F). 63 The structure is composed of an anionic [Ni3(μ3-O)(TFBPDC)3(tpt)] 2− framework counterbalanced by (CH3)2NH2 + ions derived from the decomposition of the DMF solvent. The framework could be depicted as a variant of the MIL-88 structure, but the tpt linker acted as a pore-partitioning agent, resulting in the formation of cylindrical cages and trigonal bipyramidal cages. Each cylindrical cage is formed from six [Ni3(μ3-O)] units, six dicarboxylate ligands and two tpt ligands, while each trigonal bipyramidal cage is built from five [Ni3(μ3-O)] units, six dicarboxylate ligands and three tpt ligands. Very recently Comotti and co-authors reported on a family of isostructural MOFs based on Fe(III) and bis-pyrazolate linkers with different fluorination degrees. 49 The linker H2PFX (with X = H or F) is depicted in Scheme 3.
The three MOFs Fe-PF1, 2 and 3, containing the mono-, bis- and tetra-fluoro linkers respectively, are isostructural and constituted by the same 1D building unit in which Fe3+ is six-coordinated in an octahedral environment by the nitrogen atoms belonging to the PFX pyrazolate moieties. The structures display triangular 1D channels whose faces are defined by the ligands, while the metal nodes occupy the edges. The structure of Fe-PF4 is shown in Fig. 13. The pore size distribution from N2 adsorption isotherms revealed an 8 Å average size, in good agreement with the DFT calculations. The authors used several experimental techniques (such as calorimetry and CP-MAS 1H, 13C and 19F solid-state NMR) to elucidate the behaviour of the fluorinated rotors and their influence on the CO2 adsorption properties. In the case of the mono- and bis-fluorinated linkers, the central benzene rings are placed parallel to the channel direction and are held in this position by a number of F-H hydrogen bonds among the partially fluorinated linkers. 68 On the contrary, in the tetrafluorinated analogue (Fe-PF4) the absence of H atoms on the ring does not permit H-bond interactions, and the rings are tilted by about 30° with respect to the channel axis. The linker (2E,2′E)-3,3′-(2-fluoro-1,4-phenylene)diacrylic acid (H2FBDA, see Scheme 1) was employed in 2019 by Zhao and co-workers for the synthesis of fluorinated analogues of the UiO-66 Zr-MOF. 64 The MOF, namely ZJU-800, is based on hexanuclear [Zr6(O)4(OH)4] 12+ clusters coordinated by the carboxylates of the FBDA 2− ligands to form a three-dimensional fcu structure. Fluorine atoms belonging to the central phenylene ring are exposed into the tetrahedral and octahedral pores and confer to the MOF an increased affinity towards methane, resulting in 10 mmol g−1 of CH4 adsorbed at 50 bar. 65 Finally, 3-fluoro-isonicotinic acid (HFINA, see Scheme 1) was used in 2021 by Li and co-authors for the syntheses of two Cu-based F-MOFs, namely Cu-FINA1 and Cu-FINA2, which possess bcu and 3,5-connected topologies, respectively. They are both based on small square channels (7.86 × 6.95 Å and 5.48 × 4.87 Å window size for Cu-FINA1 and Cu-FINA2, respectively) with the F atoms exposed in the inner part of the channels. 66
CO2 absorption and selectivity
Some of the F-MOFs discussed here have been found to possess a superior affinity towards CO2, and they may effectively be employed as physisorbents for targeted applications such as absorption of CO2 directly from air (DAC, 400 ppm concentration), from confined spaces (1-5% wt concentration) and from industrial point sources such as steel, cement and thermoelectric plants (CCS, 7-15% wt concentration). 54 Common chemisorbents for low-concentration CO2 environments, such as aqueous alkylamine or concentrated hydroxide solutions, display a high heat of absorption (Qst) in the 80-102 kJ mol−1 range, thus offering the best uptake under these conditions but with a high energy penalty due to sorbent regeneration.
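To put these concentration regimes on a common scale, it helps to express them as CO2 partial pressures at ambient total pressure. The snippet below is a back-of-the-envelope illustration that, for simplicity, treats the quoted percentages as mole fractions (the text quotes some of them as % wt); the numbers are indicative only.

```python
def partial_pressure(total_bar: float, mole_fraction: float) -> float:
    """Dalton's law: p_i = y_i * P_total."""
    return total_bar * mole_fraction

# Representative CO2 sources mentioned in the text (assumed mole fractions)
sources = {
    "direct air capture (400 ppm)": 400e-6,
    "confined spaces (~3%)": 0.03,
    "post-combustion flue gas (~12%)": 0.12,
}
for name, y in sources.items():
    p_mbar = partial_pressure(1.0, y) * 1000
    print(f"{name:32s} p(CO2) = {p_mbar:7.2f} mbar")
```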
F-MOFs used as physisorbents are expected to work at lower Qst values, in the 40-60 kJ mol−1 range, but with the advantage of being easily regenerated through pressure-swing or vacuum-swing adsorption. 30 In addition, the presence of fluorine renders these MOFs hydrolytically stable and less prone to degradation by water vapor and moisture.
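The Qst values quoted throughout this section are typically extracted from adsorption isotherms measured at two or more temperatures via the Clausius-Clapeyron relation, Qst = R ln(p2/p1) T1 T2 / (T2 − T1) at constant loading. The sketch below is a self-contained illustration of that procedure on synthetic Langmuir isotherms; the parameter values (q_sat, K0, dH) are arbitrary assumptions, not data from any of the MOFs discussed.

```python
import numpy as np

R = 8.314  # J mol-1 K-1

def qst_two_isotherms(p1_of_q, p2_of_q, T1, T2):
    """Clausius-Clapeyron applied to two isotherms at fixed loading:
    Qst = R * ln(p2/p1) * T1*T2 / (T2 - T1)."""
    return R * np.log(p2_of_q / p1_of_q) * T1 * T2 / (T2 - T1)

def langmuir_pressure(q, q_sat, K):
    # Invert q = q_sat*K*p / (1 + K*p) to get p at a given loading q
    return q / (K * (q_sat - q))

q_sat, K0, dH = 2.5, 1e-7, -45e3            # dH in J mol-1 (assumed)
K = lambda T: K0 * np.exp(-dH / (R * T))    # van't Hoff temperature dependence
T1, T2 = 273.0, 298.0
q_grid = np.linspace(0.1, 2.0, 5)           # loadings in mmol g-1
p1 = langmuir_pressure(q_grid, q_sat, K(T1))
p2 = langmuir_pressure(q_grid, q_sat, K(T2))
qst_kj = qst_two_isotherms(p1, p2, T1, T2) / 1000
print(qst_kj)  # recovers ~45 kJ mol-1 at every loading, as expected
```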
F-MOFs based on fluorinated anions reported by Eddaoudi were employed for DAC applications. In particular, the SIFSIX 35 and NbOFFIVE 31,54 MOFs have a fine-tuned affinity towards CO2, measured in terms of Qst and depending on the F···F distances within the square channels and on the polarity of the F atoms when linked to a more electropositive element such as Nb (NbOFFIVE-Ni) in place of Si (SIFSIX-3-Cu). As far as SIFSIX is concerned, the comparison of SIFSIX-3-Zn (Qst = 45 kJ mol−1, F···F distance = 6.784(1) Å), SIFSIX-3-Ni (Qst = 47 kJ mol−1, F···F distance = 6.694(1) Å), and SIFSIX-3-Cu (Qst = 54 kJ mol−1, F···F distance = 6.483(1) Å) revealed a stronger interaction of CO2 with shorter F···F distances. These materials show a remarkable CO2 absorption at 1 bar and 293 K, up to 4.1 mmol g−1, being among the best performing materials in the field (Fig. 15). 67 NbOFFIVE-1-Ni, named KAUST-7 by the authors, and its isostructural Al-based analogue KAUST-8 ([Ni(AlF5(OH2))(pyrazine)2]·2H2O) were also employed for SO2 trace removal from flue gas and air. 28,40 As in the case of CO2, SO2 molecules could fit into the square channels and were strongly stabilized by a number of weak interactions between the F atoms of the pillar and the electropositive sulfur atoms, and by a net of weak H-bonds between the oxygen atoms and the C-H groups of the pyrazine moieties. The isosteric heats of adsorption were as high as those observed for CO2 (about 65 kJ mol−1 for KAUST-7). Both MOFs were studied for their adsorption and separation properties with cyclic column breakthrough tests using different gaseous mixtures, resulting in a good uptake (≈2.2 mmol g−1) of SO2 in a SO2/N2 7/93 mixture. With a SO2/CO2/N2 4/4/92 gas mixture, a simultaneous and equal retention time in the column for SO2 and CO2 was observed, with an identical uptake of ≈1.1 mmol g−1, consistent with the simulated energetic trends for both polar molecules. Similar results were also observed for the KAUST-8 MOF.
The two MOFs were also tested at lower SO2 concentrations of 250 to 500 ppm in different gas mixture streams, resulting in a very similar selectivity towards CO2 and SO2 (SO2/CO2 selectivity ≈1). More recently, AlFFIVE-1-Ni (KAUST-8) and its Fe(III) analogue FeFFIVE-1-Ni were also employed by the same research group for the complete dehydration of gas streams containing CO2, N2, CH4, and heavier hydrocarbons typical of natural gas. 39 The dehydration mechanism was studied by several coupled experimental and theoretical techniques.
The high dehydration performance towards humid gas streams was reinforced by the high stability of the two compounds. The H2O Qst values for both AlFFIVE-1-Ni and FeFFIVE-1-Ni were evaluated by DSC and resulted in 63 kJ mol−1 and 64.7 kJ mol−1, respectively.
Both materials could be mildly re-activated at relatively low temperatures (378 K) compared with traditional salt-based absorbents. Among the F-MOFs based on fluorinated linkers, Ce-F4MIL140A shows an outstanding IAST selectivity (about 1900) towards CO2 in a 0.15 : 0.85 CO2 : N2 mixture at 293 K and 1 bar. The isotherm is S-shaped, typical of so-called "phase change" materials, and the pores undergo CO2 saturation over a small pressure range. 29 This high affinity towards CO2 can be explained through a concerted mechanism of CO2 coordination on an unsaturated site of the Ce coordination sphere generated upon activation, together with favorable F-C interactions of the central CO2 carbon atom with the fluorinated rings. Finally, the Fe-PFx MOFs discussed in the previous section have a considerable affinity towards CO2 in terms of overall loading under the mild conditions of 298 K and 1 bar. In particular, the Fe-PF2 MOF reached 3.2 mmol g−1, whereas the enthalpies of adsorption, measured through microcalorimetry, ranged from 28 up to 33 kJ mol−1 for the fully fluorinated Fe-PF4 MOF. 68 The PF-MOFs with MOF-801 structure, reported by Morelli Venturi in 2022, showed a certain selectivity for CO2 thanks to the presence of the fluorinated chains. Rising trends in the Qst and in the calculated IAST selectivity (MOF-801 < PFMOF-2 < ZrTFS), proportional to the quantity of fluorine incorporated in the framework, were observed. Despite an ideal IAST selectivity (100%), ZrTFS presents a lowering in the total amount of CO2 captured (2.5% wt at 298 K and 5.4% wt at 273 K) due to a window-size limitation in the diffusion of gas molecules through the MOF pores. Instead, PFMOF-2 is a good compromise between selectivity (41%) and CO2 captured (9.3% wt at 298 K and 12.2% wt at 273 K). 30
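The IAST selectivities quoted above can be reproduced, in spirit, with a compact solver. For two components whose pure isotherms are single-site Langmuir, IAST reduces to matching the reduced spreading pressures q_sat·ln(1 + K·p°) of the two hypothetical pure phases. The sketch below is a generic illustration with invented Langmuir parameters, not a re-analysis of any isotherm in this review.

```python
import numpy as np
from scipy.optimize import brentq

def psi(p, q_sat, K):
    """Reduced spreading pressure of a single-site Langmuir isotherm:
    psi(p) = q_sat * ln(1 + K*p)."""
    return q_sat * np.log1p(K * p)

def iast_selectivity(P, y1, iso1, iso2):
    """Binary IAST: find the adsorbed-phase fraction x1 such that the
    spreading pressures of the two hypothetical pure phases match,
    with Raoult-like partial pressures p_i0 = P*y_i/x_i."""
    q1, K1 = iso1
    q2, K2 = iso2
    f = lambda x1: psi(P * y1 / x1, q1, K1) - psi(P * (1 - y1) / (1 - x1), q2, K2)
    x1 = brentq(f, 1e-9, 1 - 1e-9)
    return (x1 / (1 - x1)) / (y1 / (1 - y1))

# Hypothetical Langmuir parameters (q_sat in mmol g-1, K in bar-1)
co2 = (2.5, 150.0)   # strongly adsorbed component
n2 = (3.0, 0.5)      # weakly adsorbed component
S = iast_selectivity(P=1.0, y1=0.15, iso1=co2, iso2=n2)
print(f"IAST CO2/N2 selectivity for a 15/85 mixture at 1 bar: {S:.0f}")
```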
Separation of light hydrocarbons and fluorocarbons
In the last few years, F-MOFs have also been found to be highly efficient for the separation of gaseous light hydrocarbons, both from one another and from streams containing other molecules such as CO2, H2, N2 and water.
This section deals mainly with the application of some selected F-MOFs to the separation of light hydrocarbons, particularly those with few carbon atoms (C2/C3). A brief overview of the separation of fluorocarbon gases is also included. Separation of light hydrocarbons is a challenging issue in industry owing to their similar physical properties, which make conventional separation techniques very difficult. In this regard MOFs represent valuable materials to be employed and developed. Among light hydrocarbons, C2H2 is an important gas used as a fuel in welding and widely used as a reagent in various industrial processes to form plastics, acrylic acid derivatives, etc. 69 On the other hand, propylene is used for the synthesis of several value-added products such as polypropylene, acrylonitrile and propylene oxide. 70,71 A very important separation is that between propyne and propylene (C3H4/C3H6), which is considered one of the most challenging and desired processes. 72 The pillared fluorinated MOFs of the SIFSIX family previously described are well-performing materials for this kind of application. For instance, SIFSIX-3-Ni and SIFSIX-2-Cu were found to have high selectivity in the C3H4/C3H6 separation but low loading stability in humid conditions. This was partially due to the low stability of the SiF6 2− anion. 44 However, also this compound was found to be unstable upon activation, probably owing to the formation of pentacoordinated Cu moieties. A very recent paper from Zhang and co-authors reported the synthesis of a very stable MOF, namely ZNU-2, based on the TiF6 2− anion, Cu2+ and tri(pyridin-4-yl)amine (Tripa). 73 The MOF, based on the tritopic Tripa linker connecting the copper atoms, forms complex icosahedral cages pillared by the TiF6 2− anions. This compound displayed high loading capacities for C3H6 and C3H4 at 298 K and 1 bar of 7.7 and 5.3 mmol g−1, respectively. The C3H4/C3H6 selectivity on ZNU-2 at 298 K was calculated by using ideal adsorbed solution theory (IAST) and, for a 1/99 ideal mixture, it was found to be 12.5. 74 MOFs proposed for C2 hydrocarbon separation are typically based on pores of small size decorated with polar groups. They normally display a C2H2 > C2H4 > C2H6 selectivity order, thus preferring fully unsaturated molecules. The MOF ZnBFA reported in 2022 by Zhao and co-workers and discussed in the previous section was employed for C2 hydrocarbon separation; the fully fluorinated 1D channels, constructed from the H2BFA linker, strongly interacted with the C2 molecules but with a reverse order with respect to that normally observed. 47 The adsorption amounts of C2 hydrocarbons were reported by the authors and found to be 1.35 mmol g−1 and 1.25 mmol g−1 at 273 K and 298 K under atmospheric pressure for C2H6, respectively, 1.27 mmol g−1 and 1.14 mmol g−1 for C2H4, and 1.17 mmol g−1 and 1.03 mmol g−1 for C2H2 under the same conditions. 48 In terms of Qst, ZnBFA exhibited a high enthalpy of adsorption for C2H6 of 42.8 kJ mol−1, a value higher than those measured for C2H4 (39.8 kJ mol−1) and C2H2 (29.7 kJ mol−1). DFT calculations were carried out in order to understand the higher C2H6 selectivity, which was justified by the presence of a stable net of H···F bonds between the methylene groups and the fluorine atoms of the MOF channel, as depicted in Fig. 16. 48
The fluorine-substituted o-phthalic acid-based MOFs (TKL-105 to 107) reported by Yu and co-authors in 2023 showed good selectivity for C2H6 over C2H4 and were employed for this kind of separation. The C2H4 adsorption capacities of TKL-105, TKL-106, and TKL-107 at 298 K and 1 bar were 4.44 mmol g−1, 4.51 mmol g−1 and 5.24 mmol g−1, respectively. The C2H6 uptakes of TKL-105, TKL-106, and TKL-107 at 298 K and 1 bar were 5.62 mmol g−1, 5.61 mmol g−1 and 6.0 mmol g−1, respectively.
The porosity of the modified MOFs with respect to the prototype was strongly modified in terms of pore volume and channel opening. Overall, the reduction of the channel windows (from 11.1 × 11.1 Å2 to 5.6 × 5.6 Å2) due to the insertion of the bicarboxylic linker in pocket A leads to the formation of a supermicroporous environment decorated with the -CF3 groups of the internal fluorinated linker, able to strongly interact with light hydrocarbons.
The four MOFs were employed for ethylene/ethane separation due to the high thermodynamic affinity of the modified MOFs towards ethane in comparison to the precursor LIFM-28. The C2H6 uptake amounts of LIFM-61/31/62/63 were 2.6, 4.0, 4.5, and 4.8 mmol g−1 at 273 K, respectively, values higher than those observed for C2H4 (2.1, 3.0, 3.3, and 3.7 mmol g−1, respectively). The highest uptake was observed for LIFM-63, which also possesses the highest Qst and IAST selectivity values compared to the other MOFs. The occurrence of more favourable C-H···F and C-H···π interactions of the C2H6 molecule with the framework compared to C2H4 was responsible for the high affinity of this MOF towards ethane. 76 The MOFs Cu-FINA1 and 2, reported in 2021, were also employed for C2 and C3 separations and compared with the non-fluorinated Cu-INA MOF. 66 The results showed an enhanced affinity of the F-MOFs towards the acetylene (C2H2) and propyne (C3H4) species over the more saturated molecules. The adsorption isotherms of C2H2 and C3H4 exhibit steeper slopes than those of C2H4 and C3H6 at low pressures (<30 kPa), indicating a stronger affinity or higher packing efficiency of Cu-FINA-1 and 2 towards alkynes. The authors also performed dynamic breakthrough measurements with columns packed with Cu-INA and Cu-FINA1 and 2, respectively. Cu-FINA-1 had the highest separation efficiency for the C2H2/C2H4 mixture after normalization, and the retention time of C2H2 obeys the order Cu-FINA-1 (8.1 min g−1) > Cu-FINA-2 (1.2 min g−1) > Cu-INA (0.2 min g−1). For C3 hydrocarbons the order of retention was the same as for C2, thus preferring the unsaturated C3H4 molecule, and the retention times were Cu-FINA-1 (15.8 min g−1) > Cu-FINA-2 (4.2 min g−1) > Cu-INA (0.5 min g−1) for a C3H4/C3H6 (50/50) mixture. Also in this case, theoretical calculations were used by the authors to simulate the interactions of the guest molecules within the pores, and the occurrence and strength of F···H bonds were evaluated to clarify the observed selectivities. 66 Another important separation process is C2H2/CO2. C2H2 and CO2 have similar sizes and their boiling points are almost the same; these facts make C2H2/CO2 separation a challenging goal. In this respect, an important paper on the role of the substituent effect on the micropores of a multivariate MOF, namely UPC-200, was reported in 2020. An Al-based MOF, constructed from [Al3(μ3-O)(OH)(H2O)2][COO]6 clusters, benzimidazole (BIM) and the linker H3TTCA-F, demonstrates high C2H2 uptake and good C2H2/CO2 separation efficiency (C2H2/CO2 uptake ratio of 2.6), affording new benchmark C2H2/CO2 productivity from a C2H2/CO2 (50/50) mixture under ambient conditions. 77
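The retention times and dynamic uptakes quoted above come from column breakthrough experiments, where the outlet concentration of each gas is monitored over time. A minimal sketch of how such a curve is reduced to a breakthrough time and a dynamic capacity is shown below; the sigmoid curve, flow rate and bed mass are invented for illustration and do not correspond to any experiment in this review.

```python
import numpy as np

def breakthrough_metrics(t, c_over_c0, flow_mmol_min, mass_g, threshold=0.05):
    """Retention time (first crossing of the threshold) and dynamic uptake
    q = (F/m) * integral of (1 - c/c0) dt up to the breakthrough time."""
    idx = int(np.argmax(c_over_c0 >= threshold))  # first index above threshold
    t_b = t[idx]
    uptake = flow_mmol_min / mass_g * np.trapz(1 - c_over_c0[: idx + 1], t[: idx + 1])
    return t_b, uptake

# Hypothetical breakthrough curve: a sigmoid centred at 8 min
t = np.linspace(0, 20, 400)                   # time in min
c = 1 / (1 + np.exp(-(t - 8) / 0.7))          # outlet / inlet concentration
t_b, q = breakthrough_metrics(t, c, flow_mmol_min=0.05, mass_g=0.5)
print(f"breakthrough time: {t_b:.1f} min, dynamic capacity: {q:.2f} mmol g-1")
```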
Ce-F4MIL140A, the perfluorinated Ce-MOF with MIL-140 structure, was also investigated by Zhao and co-authors in 2021 for the separation of C2H2 from CO2. It exhibited an inverse, CO2-selective sorption behaviour. The authors measured a CO2 uptake for Ce-F4MIL140A of 110.3 cm3 cm−3 at 298 K, much higher than that of C2H2 (41.5 cm3 cm−3), giving rise to a CO2/C2H2 uptake ratio of 2.66. Interestingly, the authors also prepared the analogous Zr-F4MIL140A material, which had an inverse behaviour in terms of selectivity, preferring acetylene. The IAST selectivity for a CO2/C2H2 (1/2) mixture reached 9.5 at 298 K, whereas at 273 K the selectivity increased up to 41.5. 78 Column breakthrough experiments for different gas mixtures were also performed, and computational studies examined the binding energies and the different orientations of the three molecules inside the micropores in order to confirm the experimental Qst values and the observed selectivities.
Grand Canonical Monte Carlo (GCMC) simulations were also used by the authors to model the adsorption sites of acetylene molecules inside the pore walls (a minimal toy illustration of the GCMC approach is sketched at the end of this section). The preferential binding site for the C2H2 molecule was found to be located at the top of a trigonal bipyramidal cage, with three F atoms pointing toward the interior wall of the cage. C2H2 interacts with the strongly electronegative F atoms as depicted in Fig. 17. 62 The last important application of F-MOFs discussed here concerns their adsorption and separation properties towards fluorocarbons and chlorofluorocarbons (CFCs). 82 Such halogenated gases are harmful and critical compounds in which one or more hydrogen atoms have been replaced with fluorine and chlorine. They are commonly used as refrigerants, propellants, in the electronics industry and, for their hydrophobic properties, in foams.
While CFCs have been banned for more than a decade, fluorocarbons are powerful greenhouse gases and may also contribute to climate change. 83 The fully fluorinated MOF LIFM-86 reported in 2017 has been tested for R22/N2 separation. The IAST selectivity for R22/N2 over the parent compound LIFM-28 was increased 6-fold, reaching a value of over 250 at zero coverage, with an isosteric heat of adsorption towards R22 of 30 kJ mol−1.
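As a back-of-the-envelope illustration of what the GCMC simulations mentioned earlier in this section do, the toy model below samples a grand canonical lattice gas of independent adsorption sites (no guest-guest interactions), whose exact coverage is the Langmuir isotherm. Real GCMC studies of MOFs use atomistic force fields and periodic frameworks; everything here (site energy, activities, sweep counts) is an invented minimal example.

```python
import numpy as np

rng = np.random.default_rng(0)

def gcmc_coverage(z, beta_eps, n_sites=5000, n_sweeps=2000, burn=500):
    """Grand canonical MC for independent adsorption sites of energy -eps.
    z = exp(beta*mu) is the gas activity; w is the occupied-site weight.
    Because the sites do not interact, all of them can be updated at once."""
    w = z * np.exp(beta_eps)
    occ = np.zeros(n_sites, dtype=bool)
    samples = []
    for sweep in range(n_sweeps):
        u = rng.random(n_sites)
        insert = (~occ) & (u < min(1.0, w))      # Metropolis insertion move
        remove = occ & (u < min(1.0, 1.0 / w))   # Metropolis removal move
        occ = (occ | insert) & ~remove
        if sweep >= burn:
            samples.append(occ.mean())
    return float(np.mean(samples))

beta_eps = 2.0  # adsorption energy in units of kT (assumed)
for z in (0.01, 0.05, 0.2, 1.0):
    theta = gcmc_coverage(z, beta_eps)
    exact = z * np.exp(beta_eps) / (1 + z * np.exp(beta_eps))  # Langmuir
    print(f"z = {z:5.2f}   GCMC {theta:.3f}   Langmuir {exact:.3f}")
```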
Conclusions
This review aimed to give an overview of the structural features and the main applications of MOFs containing fluorine atoms, either as anionic units or as coordinating elements of more complex inorganic units (and therefore directly linked to the structural metals), or as part of the fluorinated linkers used in MOF synthesis. Due to the strong polarization of the M-F bond, the first family of MOFs discussed here provides a very effective selectivity towards some guest species like CO2, thus resulting in efficient materials for direct absorption of CO2 from air or confined spaces. This part of the work was mainly developed by the group of Mohamed Eddaoudi, who presented several structurally related materials that have been thoroughly investigated for DAC, CO2 separation and dehydration. The second part of the review dealt with F-MOFs based on fluorinated linkers. Also in this case the presence of fluorine was found to be crucial for increasing the affinity towards CO2, as demonstrated by the comparison with non-fluorinated counterparts. In both cases the presence of supermicropores decorated with fluorine atoms strongly enhances the affinity of such materials towards CO2 and their performance in the separation of light hydrocarbons. Another important strength of F-MOFs is their high resistance to hydrolysis and, in some cases, their enhanced hydrophobicity. Thanks to the unique features that fluorine confers, both in the inorganic and in the organic units, a new route in the synthesis of MOFs for CO2 capture and separation is established, i.e., the use of fluorinated analogues of common linkers in the preparation of the most common MOFs. However, the main challenge of this strategy is the synthesis of the linkers, which are not always commercial or low-cost. Concerning the possible impact of this research on industrial applications, it can be remarked that these materials could be effectively employed for the separation of gases (mainly CO2 at low concentrations) from primary emission sources in both pre- and post-combustion technologies. Manufacturing of these compounds for large-scale application still requires further investigation concerning cost and other issues such as compatibility with organic polymers (gas separation mixed membranes), preservation of the structural and functional features upon pelletizing, and so on.
These nets are linked to each other in the third dimension by pillaring SiF6 2− anions to form frameworks with a primitive cubic topology. These compounds were inspired by a previously reported compound with the formula [Cu(4,4′-bipyridine)2(SiF6)]n (SIFSIX-1-Cu). The use of 4,4′-dipyridylacetylene (dpa) afforded a compound with the same topology (SIFSIX-2-Cu) but with increased pore size due to the longer size of dpa compared to 4,4′-bipyridine. The family of SIFSIX-3-M was designed from the perspective of shifting the porosity towards the ultramicropore range. The use of a shorter pyrazine linker, with respect to the previous ones, shortened the metal-metal distance in the plane while maintaining the same spacing along the SiF6 2− pillar, resulting in a pore of 3.84 Å (measured along the diagonal), which has dramatic implications for the purely physisorbing mechanism of CO2 at very low partial pressures. The first reported SIFSIX-3 MOFs were based either on Zn, Ni or Cu and have the formula M(II)SiF6(pyrazine)2·2H2O. The structure of SIFSIX-1 is shown in Fig. 1. In a recent paper, DFT and Grand Canonical Monte Carlo (GCMC)-based simulations were used to model the theoretical structures and the CO2 absorption properties of several MOFs derived from SIFSIX-Ni-pyrazine.
Fig. 6 (a) The coordination environment of the ZnO4 units and FBA linkers. (b) Orthographic views down the c axis of Zn-FBA. (c) The hexagonal pore structure in Zn-FBA along the c axis illustrated by the Connolly surface in orange (zinc = lavender; oxygen = red; carbon = gray; fluorine = light blue; hydrogen = white). Reproduced from ref. 48 with permission from John Wiley & Sons, copyright 2022.
Fig. 7 Crystal structure of the LIFM-100 phases with narrow pore (a) and with large pore (b), extended view along the a direction. Colour code: copper olive green, carbon grey, nitrogen blue, fluorine green, oxygen red. 51
Fig. 9 Crystal structure of Ce-F4-MIL140A, view along the c axis. Color code: cerium cluster orange, carbon grey, fluorine green. 29
Fig. 10 Comparison of the crystal structure viewed along the c axis and the local environment around the adsorption site of the as-synthesised (a and b, respectively), evacuated (c and d, respectively) and CO2-loaded (e and f, respectively) forms of Ce-F4MIL140A. Colour code: Ce, orange; F, green; C, grey; O, red; H2O, blue. H atoms are not shown because their positions cannot be determined from PXRD data. Reproduced from ref. 58 with permission from the Royal Society of Chemistry, copyright 2023.
Fig. 11 Chemical structure of the ligands (A). Polyhedral representation of the two MOFs (B and C) and rotor disorder drawn as ellipsoids of nodes and linkers (D). The thermal ellipsoids for the Al(OH)(COO)2 nodes as well as the FTR and FTR-F2 rotors in both MOFs are reported, as derived from PW-DFT phonon calculations. The ellipsoids are displayed with a 95% probability factor. Atom labeling: hydrogen = white, carbon = grey, oxygen = red, fluorine = green, aluminum = purple. Reproduced from ref. 61 with permission from John Wiley & Sons, copyright 2022.
Scheme 3 Synthetic conditions for the preparation of the three fluorinated bis(pyrazolyl)-based MOFs. 49
Scheme 4 Molecular structure of the mixed carboxylate/tetrazolate linker H2-FTZB (see the text).
Fig. 14 (a) CO2 adsorption isotherm for NbOFFIVE-1-Ni up to 1 bar and 298 K. (b) CO2 adsorption isotherms for NbOFFIVE-1-Ni at different temperatures. (c) Comparison of the CO2 uptake at low pressures between NbOFFIVE-1-Ni and the SIFSIX family as well as Mg-MOF-74, one of the best MOFs for low-pressure CO2 adsorption. (d) CO2 heat of adsorption for NbOFFIVE-1-Ni as compared to that of SIFSIX-3-Ni and SIFSIX-3-Cu, determined using multiple CO2 adsorption isotherms as well as TG-DSC measurements. Reprinted with permission from ref. 54. Copyright 2016 American Chemical Society.
Fig. 15 (a) CO2 data for 1 and 2 at 298 K. The inset shows the steep slope for 1 and 2 up to 50 Torr. (b) Qst in 1 and 2 calculated from the 258, 273, and 298 K adsorption isotherms. Reprinted with permission from ref. 67. Copyright 2013 American Chemical Society.
Fig. 16 DFT-D calculated (a) C2H2, (b) C2H4, and (c) C2H6 adsorption locations in Zn-FBA. The unit for bond length is Å; carbon, fluorine, and hydrogen atoms in the framework are represented by grey, green, and white, respectively, while carbon and hydrogen atoms in the adsorbate are represented by orange and white, respectively. Reproduced from ref. 48 with permission from John Wiley & Sons, copyright 2022.
C2H2/C2H4 and CO2/C2H2 gas systems were employed by Belmabkhout and co-authors in 2018 to test the separation performances of two isoreticular F-MOFs belonging to the MFFIVE-1-Ni family, namely NbOFFIVE-1-Ni and AlFFIVE-1-Ni, with [NbOF5] 2− and [AlF5] 2− as pillars. The supermicroporous environment, together with potential open metal sites, as in the case of AlFFIVE-1-Ni, resulted in favourable interactions towards C2H2 but in decreased affinity towards CO2. The absolute absorption of C2 hydrocarbons for the two F-MOFs was first evaluated at 298 K up to 1 bar, resulting in 0.7 mmol g−1 of C2H4 at 1 bar for NbOFFIVE-1-Ni and 1.15 and 2.4 mmol g−1 at 0.1 and 1 bar, respectively, for AlFFIVE-1-Ni. The C2H2 adsorption isotherm for AlFFIVE-1-Ni resulted in uptakes of ca. 1.0, 3.2 and 4.6 mmol g−1 vs. ca. 0.023, 0.58 and 2.4 mmol g−1 for NbOFFIVE-1-Ni at 0.01, 0.1 and 1 bar, respectively. Variable-temperature adsorption isotherms of C2H2 and C2H4 at 273, 298 and 313 K were used to calculate the Qst, resulting in 38 vs. 34 kJ mol−1 for C2H2 in AlFFIVE-1-Ni and NbOFFIVE-1-Ni, respectively, and 25 to 31 kJ mol−1 for C2H4 in AlFFIVE-1-Ni and NbOFFIVE-1-Ni, respectively. C2H2/C2H4 50/50 adsorption column breakthrough experiments were also collected at 298 K, and the results showed that NbOFFIVE-1-Ni retained 50% more C2H2 than AlFFIVE-1-Ni, while C2H4 was 50% less retained in NbOFFIVE-1-Ni. This result indicated a better selectivity of NbOFFIVE-1-Ni towards bulk C2H2 in the feed, in agreement with the higher Qst of AlFFIVE-1-Ni for C2H2. Finally, AlFFIVE-1-Ni was found to effectively retain more C2H2 than the Nb analogue also when using dilute C2H2/C2H4 1/99 mixtures. This work evidenced that the fine tuning of the isosteric heat of adsorption, by changing the metal nature of the building block, is a key factor to enhance the separation performance of the MOF towards C2H2 in dilute C2H4 or CO2 feeds. 79 A remarkable result for ethylene purification from a ternary C2H4/C2H2/CO2 mixture was achieved by Zaworotko and co-workers in 2021 with an ultramicroporous pyrazine-based MOF of the pcu SIFSIX family, namely MFSIX-17-Ni. 80 The MOF, with the formula [Ni(pyz-NH2)2(TiF6)]n, is the analogue of SIFSIX-17-Ni, containing the SiF6 2− anion, reported in 2018 by Chen and co-authors and studied for propyne/propylene separation. 81 In this paper the authors studied both the Si- and Ti-containing MOFs for binary and ternary mixture separations (C2H4/C2H2 and C2H4/C2H2/CO2) through column breakthrough experiments. The results for the ternary mixture, evaluating the C2H4 effluent streams from the SIFSIX-17-Ni and TIFSIX-17-Ni fixed beds, revealed C2H4 purities as high as 99.958% and 99.912%, with high-purity ethylene productivities of 7.2 and 15.8 cm3 g−1. DFT calculations were employed to calculate the binding energies and the occurrence of non-covalent interactions (C-H···F and F···O=C).
extensively developed in the last decade following the pioneering work of Eddaoudi and co-workers. One of the first MOFs based on fluorinated anions was reported by Adil and co-workers in 2013, who synthesised a mixed Cu/Al F-MOF with the formula CuAlF4.5(OH)0.5(H2O)[HAmTAZ]2 (HAmTAZ = 3-amino-1,2,4-triazole). The structure is composed of copper-triazole square-grid layers pillared by AlF5(H2O) octahedra, generating a three-dimensional network with a pcu topology.39 In 2013 a series of supermicroporous F-MOFs, denoted SIFSIX-3-M, was reported by the Eddaoudi group. These compounds, with the general formula MSiF6(pyrazine)2·2H2O and M = Ni, Cu and Zn, share the same structural motif: a square-grid (sql) 2D net constituted of metals coordinated in the plane by pyrazine.12 Increasing the ratio of C3H4 in the gas mixture leads to improved C3H4/C3H6 selectivity, at 13.7 and 16.2 for 10/90 and 50/50 C3H4/C3H6 mixtures, respectively. Qst values at near-zero loading for C3H4 and C3H6 were 43.0 and 34.5 kJ mol−1. These values are slightly lower than those of other MOFs for similar applications but allow facile recovery of C3H4 by desorption under mild conditions. Another important field of application concerns the separation of C2 hydrocarbons, namely acetylene (C2H2)/ethylene (C2H4)/ethane (C2H6). In the literature there are several examples of MOFs used for this kind of separation.
An RE-fcu MOF based on Dy, with the formula [(CH3)2NH2]2[Dy6(μ3-OH)8(FTZB)6(H2O)6], was recently reported by Zhou and co-workers for the separation of C2H2/C2H4 and the selective adsorption of benzene. The linker used for the MOF construction, shown in Scheme 4, is the mixed carboxylate/tetrazolate linker 2-fluoro-4-(1H-tetrazol-5-yl)benzoic acid (H2FTZB). The activated MOF was used for gas separation by breakthrough curves, and the adsorbed amounts of C2H2, C2H4 and CH4 reported at 1 atm and 273 K were 140.4, 114.3 and 29.3 cm3 g−1; the value for acetylene is higher than that of many known MOFs used for light-hydrocarbon separation. The reported Qst values of acetylene (C2H2), ethylene (C2H4) and methane (CH4) were 26.7, 21.1 and 16.3 kJ mol−1, respectively, revealing favourable interactions of the framework towards C2 hydrocarbons and a potential adsorption selectivity for C2H2 and C2H4 against CH4.75 | 2023-10-07T05:10:23.751Z | 2023-10-04T00:00:00.000 | {
"year": 2023,
"sha1": "cf4dcbaa0dd067cc119f7c3ba35f43d124616056",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "cf4dcbaa0dd067cc119f7c3ba35f43d124616056",
"s2fieldsofstudy": [
"Chemistry",
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
234619323 | pes2o/s2orc | v3-fos-license | Infants’ intermodal perception of numerosity in an experimental study with objects and socially salient stimuli
According to Gelman & Gallistel (1978), the ability to count is the preeminent mechanism by which young children understand numbers. However, a global evaluation of number is encountered in young infants long before counting and precise computational skills. Klahr & Wallace (1976) assumed that infants' detection of numerosity is a rapid perceptual process of immediate apprehension of the numerosity of an array, a skill which is called subitizing (see also Benoit, Lehalle & Jouen, 2004). This early ability is considered to be innate and prior to the ability to count, which is a socially transmitted verbal labelling. It seems that early counting skills are
preceded by a more primitive, direct perceptual awareness of numerosity. Starkey, Spelke & Gelman (1983) have already shown that 7-month-old infants are able to intermodally detect the numerical invariant between visual and auditory stimuli. They found that infants look longer at visual stimuli whose numerosity corresponds to the number of sounds they listen to, suggesting that infants are able to perceive numerical invariants across two modalities. Nevertheless, Mix, Levine & Huttenlocher (1997) failed to replicate the above-mentioned results. They used drumbeat sequences equated for either rate or duration, to ensure that these cues were not informative, but infants looked indiscriminately at the matching or non-matching stimulus. Kobayashi, Hiraki, Mugitani & Hasegawa (2004) found that 6-8 month-old infants show intermodal numerical perception, while toddlers fail in similar tasks.
The present study attempts to assess early infants' ability for cross-modal perception of numerosity (see Starkey et al., 1983, 1990). Moreover, this research introduces an additional factor, by including a contrast between object stimuli (photos of objects accompanied by mechanical sounds) and socially salient stimuli (a photo of a face accompanied by a voice). Before proceeding to the present research, a brief literature review of similar studies might be helpful.
Experimental data on early infant numerical abilities
Neonates and young infants discriminate 'two' from 'three' among visual stimulus arrays consisting of small number sets and among auditory stimuli of two or three syllables (Starkey & Cooper, 1980; Van Loosbroek & Smitsman, 1990; Bijeljac-Babic, Bertoncini & Mehler, 1991). Infants aged 4 to 12 months add, subtract and find impossible events surprising; for example, they look longer at events such as 1 + 1 = 1, or 2 − 1 = 2, than at events such as 1 + 1 = 2, or 2 − 1 = 1 (Cooper, 1984; Simon, Hespos & Rochat, 1995; Wynn, 1992, 1996; Koechlin, Dehaene & Mehler, 1997; Xu & Spelke, 2000). The method used by these authors is the violation-of-expectation paradigm. When objects magically appear or disappear, infants seem to be surprised and they focus their attention on the unexpected event. According to Marks & Cohen (2002), the above results do not really indicate that 5-month-old infants can discriminate the exact numerical difference between stimuli; instead, these results indicate that infants focus their attention on the stimuli either because of a habituation effect or because of bigger numerical sets of stimuli ("more items to look at" model).
Nevertheless, Kobayashi et al. (2004) found that 6-month-old infants are able to recognize basic arithmetic operations across sensory modalities (e.g. 1 object + 1 auditory tone = 2). In their violation-of-expectation paradigm, neither familiarity nor complexity of stimuli affected the results. Mix, Huttenlocher & Levine (2002) assumed that perhaps infants really do process discrete numbers, but we cannot tell from the existing studies. Mix et al. (2002) suggest that infants are sensitive to differences in spatial extent and other perceptual variables (e.g. surface area, contour length, rhythmic patterns etc.), rather than to discrete number changes.
Infants tend to focus their attention in a way that optimizes overall arousal. According to Moore, Benenson, Reznick, Peterson & Kagan (1987), because of optimal stimulation seeking, in similar experiments, infants look more at non-matching stimuli. Another bias is that rhythmic patterns in such tasks might influence infants' ability to intermodally detect numerical equivalence. According to Mix et al. (2002), number and rhythm cannot be tested separately. The phenomenon seems to be more complex: rhythmic patterns in auditory stimuli should be better controlled so that they will not intermix with numerical discrimination, for even the same duration of sounds could lead to different rhythms. Mix et al. (2002) suggest that there is no need to posit a representation of discrete number. Instead, a developmental account that assumes only representations of spatial and temporal cues in infancy would be sufficient. The 'accumulator model theory' (Wynn, 1998a) suggests that each object is enumerated as an impulse of activation from the nervous system. To extract number (or time), the accumulator stores each impulse until the end of counting (or timing), and then transfers this information into memory where it outputs one value for the impulses counted. The above theory was derived from experiments on numerosity with rats (Meck & Church, 1983).
Explanatory models of early infant detection of numerosity
Object-file theory (Uller, Carey, Huntley-Fenner & Klatt, 1999) suggests that success in arithmetic tasks may reflect nothing more than already well documented physical reasoning abilities (see also Simon, 1997). According to Baillargeon (1994), infants may build a model of objects (in the violation-of-expectation paradigm), updating this model when new objects are added or taken away. Moreover, contrary to symbolic models, which presuppose the existence of an early ability to construct abstract representations of number (see Gelman & Gallistel, 1978), the "object-file" model facilitates short-term and working memory. The "object-file" model suggests that precise small numbers may be represented by a different system, used by adults for object-based attention and tracking.
Consistent with object-file theory are the results of a study with 6-month-old infants (Feigenson, 2011), according to which infants can compare numerical information obtained in different modalities using representations stored in memory. The above results indicate the existence, since birth, of an Approximate Number System. According to Trick & Pylyshyn (1994), two parallel mechanisms are responsible for number perception: one pre-attentive mechanism is responsible for the approximate representation of small numbers and one attentive mechanism is responsible for counting and precise enumeration. Finally, an analog magnitude system may underlie success with larger numbers, concerning approximate large-number quantification (Wynn, 1998b). Dehaene, Spelke, Pinel, Stanescu & Tsivkin (1999) suggest a combination of two models, exact arithmetic and approximate arithmetic. Exact arithmetic is characterized by a language-specific format and recruits networks involved in word association processes. On the contrary, approximate arithmetic is language independent and relies on a sense of numerical magnitudes and visuo-spatial processing. Arithmetic intuition may emerge from the interplay of these two brain systems.
According to Mix et al. (1997), early cross-modal perception of numerosity is not a matter of equivalence. Infants use temporal characteristics of the overall sequences rather than the number of individual sounds. Thus, intermodal matching is achieved on the basis of overall amount, rather than on number itself. Feigenson et al. (2002) suggest that infants rely on multiple mechanisms, some nonnumerical, in tasks that have been interpreted as addressing numerical competence. In our opinion, numerical discrimination might derive from nonnumerical properties of physical stimuli. In infants' attention, numerical properties of physical stimuli cannot be viewed separately from continuous information (see also Mix et al., 1997). Intermodal matching is one case of event processing: infants perceive objects combined with sounds, extended in space and time.
According to Theory of Direct Perception, the senses are unified at birth (Gibson, 1969). Shape, intensity level, motion, number and rhythm are experienced directly as global, amodal perceptual qualities (Stern, 1985). Objects and events have nested properties that are detected in the context of increasing specificity (Bahrick, 2001). Detection of small numbers in early infancy seems to be a complex cognitive ability that might include: a) multimodal perception of physical stimuli, b) approximate perception of relative numbers (few vs. many), c) perceptual ignorance of the qualitative differentiation of visual stimuli and d) abstraction of the numerical correspondence through concentration on the common quantity between visual and auditory stimuli.
Infants' perception of social stimuli
Attention to the human face by 5-month-old infants is characterized by a preference for a complex animated face, while preference for a complex and unfamiliar face increases with age (Sherrod, 1979). Recent research on the development of infants' social cognition shows that, from birth, the human system detects social agents on the basis of both innate mechanisms and perceptual experiences (Simion, di Giorgio, Leo & Bardi, 2011). Newborns prefer face-like stimuli over distractors, and older infants gradually focus their attention on faces (Frank, Vul & Johnson, 2009). According to Simion, Turati, Valenza & dalla Barba (2007), newborns' face preferences are due to a set of non-specific constraints that stem from the human visuo-perceptual system rather than to a representational bias for faces. Face perception during early infancy is partially explained by the innate predisposition of a subcortical mechanism which tunes infants' attention towards a preference for face-like stimuli (Mondloch, Lewis, Budreau, Maurer, Dannemiller, Stephens & Kleiner-Gathercoal, 1999).
Why is the human face such an attractive stimulus? According to Werner (1948), face perception, as part of physiognomic perception, involves the direct experience of amodal qualities by the infant. These qualities are categorical affects rather than perceptual qualities such as shape, intensity or number. Amodal affect arises from experience with the human face in all its emotional displays. Stern (1985) stressed the importance of the supra-modal form of perceived information in infancy. According to Stern, "infants act upon abstract representations of qualities of perception" (1985, p. 51). From this early human ability stems the organization of experiences concerning the perception of an emerging self and other.
Faces and voices pervade perceptual experience from the moment an infant is born. Infants possess an early ability to intermodally perceive the human face. Intermodal relations between face and voice are crucial for the acquisition of linguistic, social and emotional skills.
Young infants can identify a face by hearing the corresponding voice, and they can discriminate the synchrony between face and voice, coordinating the two stimuli on a spatio-temporal basis (Spelke & Cortelyou, 1981). This evidence may imply that there is an early tendency for spatial coordination between visual and auditory perception of the face. The human face is considered to be a dynamic social stimulus that attracts infants' attention in a way that allows the related early perceptual strategies to develop into cognitive procedures. The human face represents a unique, highly salient and ontogenetically significant stimulus which provides critical cognitive and social information (Simion et al., 2007), regarding identity (Valentine, Edelman, & Abdi, 1998), direction of attention (Langton, Watt & Bruce, 2000), intentions (Baron-Cohen, 1995) and emotions (Ekman, 1982).
Aims of the study
In the present research we used the same methodological paradigm (preferential looking technique) as in Starkey, Spelke & Gelman's (1990) research, in order to investigate infants' intermodal perception of numerical correspondences between auditory and visual stimuli. Nevertheless, this study differs from Starkey et al.'s (1990) methodology in two ways: a) the auditory stimuli here are piano sounds, instead of drum beats, and b) in two additional experimental Conditions we inserted social stimuli into the task, namely the voice and face of the mother or of an unknown woman.
More specifically, in the present experimental cross-sectional study we were interested in investigating early infants' ability to detect numerical matching of two-dimensional stimuli across two modalities. We also tested the hypothesis that shape variation of the visual stimuli would hinder infants' ability to intermodally detect the numerical invariant. As already mentioned, Kobayashi et al. (2004) had found no effect of shape complexity on infants' intermodal perception. However, in accordance with Cohen & Marks (2002), we assumed that similarity of the shape of simultaneously projected objects would facilitate infants' intermodal perception of numerosity. Moreover, we tested the possible role of socially salient stimuli (face and voice) in numerical amodal perception. The possible role of social cues in infant numerical perception has been little investigated by related studies. Therefore, in accordance with Patterson and Werker (2002), we tested the hypothesis that social stimuli would attract infants' attention in such a way that infants would be distracted from intermodally perceiving the numerical invariant. Additionally, we tested the hypothesis that familiarity of the mother's face-voice, compared to the unfamiliar face-voice of a stranger woman, would further affect infants' intermodal detection of numerosity. Finally, we were interested in possible age and gender effects on the particular perceptual phenomena.
Subjects
In accordance with the ethics of research with children, in our study all parents were asked to give written permission to the researchers so that their infant could participate in the research. Infants' families were recruited with the aid of obstetricians, gynaecologists and paediatricians who worked in the city of Rethymno, in Crete. All infants who participated in the research were born by full-term gestation and natural delivery (preterm and caesarean gestations were excluded from the research). At a first stage we sent a letter to the parents giving information about the study to be held. We explained to the parents that we aimed at investigating early infant perception. The researchers first visited the infant at her home, discussed the nature of the study with both parents and scheduled an appointment at the laboratory within a maximum of two weeks. During our visit at home, the researchers took a photo of the mother, who was told not to change her hair look (hair-cut or colour) until she arrived at the laboratory. We also tape-recorded the mother uttering "La". In the meantime, we reproduced the mother's voice in a sound laboratory so that we could obtain the auditory stimuli (LA, LA-LA and LA-LA-LA) in steady rhythm, pitch and tonality.
At the laboratory, the infant sat on her mother's lap and prior to the experiment, the mother was told not to intervene with her infant's reactions. As soon as the infant started to be uneasy or sleepy, or as soon as she started to cry, the experiment was terminated.
Initially, 140 infants were examined in a cross-sectional experimental design. Several infants fell asleep during the test (N=21), other infants started to cry or became uneasy (N=21), and several mothers intervened in their infants' reactions, contrary to the instructions of the researchers (N=20). In all these cases (N=62), the experimental procedure was immediately stopped by the researchers. Consequently, these infants did not fulfil the task and their responses were excluded from both the microanalysis and the statistical analysis.
Stimuli
We chose the objects used in the projected slides (visual stimuli) on the basis of shape complexity. Therefore, we used images of a ball (a simple circular stimulus), a comb (a simple linear stimulus), a spoon (a less simple stimulus, combining circular and linear arrangement) and a rattle (a more complex stimulus). During the interview at home (about two weeks before their visit to the laboratory), parents were encouraged to get their infants habituated to these objects at home.
Visual stimuli (images of objects and the face of the mother or of an unknown woman) were shown through two slide-projectors. The images were projected at a distance of 2 meters in front of the infant's visual field, and the distance between the two simultaneously projected images in each trial was 20 cm. The slides' background color was blue and the slides' dimensions were 108x80 cm. In each trial two different numerical combinations of items were projected (e.g. 1 ball on the right side of the infant's visual field and 2 balls on the left side of the infant's visual field). In some trials the face of the mother or of an unknown woman was projected. The photo of the face was taken about two weeks before the visit of the infant to the laboratory. All images were elaborated with Photoshop software so that the contour remained constant; color, brightness and contrast of the images were kept identical across trials.
Auditory stimuli were produced either by a piano or by the mother's or an unknown woman's voice. In the case of the piano sound, one, two or three La tones were produced and, in the case of the voice sound (La tonality), one, two or three La syllables were produced in staccato pitch. The mother's voice was tape-recorded about two weeks before the infant visited the laboratory. As already mentioned, the voice was elaborated in a professional sound studio and was tuned to La tonality. Consequently, pitch, rhythmic pattern, loudness, duration and tonality of the auditory stimuli (both piano sounds and voice) were kept steady across trials. The auditory stimulus was reproduced during the projection of the visual stimuli. The infant could hear the sounds through two speakers placed on the right and left of the experimental room. No other stimuli were available inside the experimental room, which was lit by a dim light above the infant's head.
Experimental Setting and Procedure
All equipment (piano, tape-recorder and slide-projectors) was set up in a backstage room, where the researcher and her assistants could observe the infants' behavior in the experimental room through a one-way mirror. A dim light above the infants' head made it possible to video record infants' facial expressions and body reactions.
Infants' behavior was recorded with a video camera (Panasonic NV MS4 S-VHS) that was positioned in front of the infant's seat and out of the infant's view.
The total duration of each trial was 12 seconds. Sound was produced 4 seconds after the images had been projected. We measured infants' attention to the visual stimuli, immediately after the sound was heard.
Conditions with identical vs. non-identical objects: in Condition with identical objects-piano sounds, infants attended to 6 trials. Each trial consisted of a pair of images representing identical objects (e.g. 2 balls at the left visual field of the infant and 3 balls at the right visual field of the infant). The two simultaneously projected slides varied only in numerosity of the represented objects. Four seconds after the images were projected, one, two or three piano sounds were played for 3 seconds.
In Condition with non-identical objects-piano sounds, infants also attended to 6 trials. In this Condition, the projected objects varied both in shape (non-identical objects) and in numerosity (e.g. 1 ball at their right visual field and 3 rattles at their left visual field). Four seconds after the images were projected, one, two or three piano sounds were played for 3 seconds.
Conditions with objects - face and piano / voice stimuli: in the Condition with objects and mother's face - piano sounds / mother's voice, infants attended to 8 trials. The face of the infant's mother was projected at one side of the visual field, while at the other side of the visual field two or three identical objects were presented. In half of the trials, one, two or three piano sounds were heard, and in the other half of the trials the mother's recorded voice was heard singing one, two or three La syllables.
In the Condition with objects and unfamiliar face - piano sounds / stranger woman's voice, there were also 8 trials. In this Condition, an unknown woman's face and voice were presented, similarly to the Condition with the mother's face/voice. In order to control for possible effects of habituation to the stimuli (which could lead infants to look away from the matching stimuli), the presentation order of Conditions was counterbalanced; namely, half of the infants attended to the social stimuli first and half of the infants attended first to the non-social stimuli. In all Conditions, projection of the visual stimuli was counterbalanced across trials, on the basis of the right/left and bottom/top visual field of the infants; namely, in half the trials of each Condition infants could see a numerical configuration of object-like stimuli on their right visual field and in the other half they could see the same numerical configuration on their left visual field.

[Figure 1: Conditions with objects and piano sounds; Conditions with objects - face and piano sounds / voice. * In the latter Conditions one visual stimulus was always the face.]

In each Condition, visual stimuli were projected randomly across trials. In all trials, a numerical matching of the visual-auditory stimuli could be perceived (e.g. two balls and three rattles, accompanied by two sounds). In addition, for each numerosity we projected two possible combinations, in order to control for sound matching to the smaller or the bigger numerosity (see Figure 1). Each infant looked at a randomly selected combination of visual-auditory stimuli. Consequently, each infant attended to 28 combinations of stimuli, with a total duration of 12-15 minutes. Not all combinations of stimuli presentation could be used because, if so, infants would attend to too many trials, and in that case, as we know from related research, young infants would get easily tired.
Microanalysis of infant behavior
Intermodal coordination was assessed by means of preferential looking technique (the visual stimuli were projected simultaneously, and the infant chose to look at one or another). On one hand, this method is considered to be the most appropriate for investigation of intermodal perception (see Starkey et al. 1983); on the other hand, the use of preferential looking technique constrains the influence of memory in a perceptual task.
The video-taped infant behaviours were microanalyzed using the computer program Logger (Macleod, Morse & Burford, 1993). This software is compatible with a Macintosh Quadra 650 connected to a video recorder (Panasonic S-VHS VCR AG-7355) and analyzes the recorded behavior at a resolution of 1/25 of a second.
Statistical analysis
Duration of infants' attention to the visual stimuli was converted into seconds in Excel, and the data were then entered into SPSS (Statistical Package for the Social Sciences) for statistical analysis.
Success in the particular experimental task was defined as the state where, immediately after the sound and while the visual stimuli were still projected, the infant preferred to look at the visual stimulus numerically corresponding to the sound (e.g. after one sound of La, the infant looked longer at the visual stimulus representing one object than at the visual stimulus representing two objects). On the other hand, failure was defined as the state where, immediately after the sound, the infant preferred to look at the visual stimulus numerically non-corresponding to the sound (e.g. after one sound of La, the infant looked longer at the visual stimulus representing three objects than at the stimulus representing one object). In our study, 78 infants attended to 28 trials. In every trial each infant had one or more successes or failures. The corresponding times (seconds) were added, and the result was a total time of success and a total time of failure. In the final analysis of the data, time of success was the total mean time of infant attention towards the visual stimuli which numerically matched the auditory stimuli. Time of failure was the total mean time of infant attention towards the visual stimuli that did not match the number of sounds.
The dependent variable (indicated in the results as mean time of success tendency) is the difference of the total failure time from the total success time. This variable uses all responses concerning infant visual attention to the stimuli. Thus, we got a clear picture of the tendency towards success or failure. More specifically, positive mean time of infant attention (above zero) indicates success, whereas negative mean time (below zero) indicates failure in the particular experimental task. To search for mean differences in success tendency, we used T-tests for independent samples or ANOVA. When more explanatory variables (independent variables that explained a useful interaction) were inserted in the data analysis, we used general linear models (GLM 4.0; see McCullagh & Nelder, 1989).
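To make the construction of this score concrete, the sketch below computes the success tendency per infant and Condition and compares two Conditions with an independent-samples t-test. It is an illustration only, not the authors' analysis code; the file and column names are hypothetical:

import pandas as pd
from scipy import stats

# One row per look, with hypothetical columns:
# infant_id, condition, match ('success'/'failure'), duration_s
looks = pd.read_csv("looking_times.csv")

totals = looks.pivot_table(index=["infant_id", "condition"],
                           columns="match", values="duration_s",
                           aggfunc="sum", fill_value=0.0)
# success tendency = total success time - total failure time
totals["tendency"] = totals["success"] - totals["failure"]

a = totals.xs("identical_objects", level="condition")["tendency"]
b = totals.xs("non_identical_objects", level="condition")["tendency"]
t, p = stats.ttest_ind(a, b)
print(f"t = {t:.3f}, p = {p:.4f}")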
The reliability test showed a positive correlation between the two observers (Pearson r = 0.94, p < .001). The regression coefficient between the two observers approaches the ideal value of 1 (β = 0.96, SE = 0.032, p < .001).
Infants' attention and orienting responses
Presentation order of the experimental Conditions did not affect infants' duration of attention [independent samples T-test, t(1203) = -.111, p > .05]. Namely, whether infants first attended to the Conditions with objects or first to the Conditions with objects and social stimuli, their success tendency was not influenced (see Table 1).
Analysis of the data in all Conditions showed that 43.2% of looking behaviours were successful (infants looked at the numerically matching stimulus) whereas 31.6% of looking behaviours were unsuccessful (infants looked at the numerically non-matching stimulus). A binomial test showed that this result is statistically significant at the .05 level. The remaining 25.2% of infant reactions concerned looking away from the stimuli, for example, looking at the mother, around them or nowhere in particular. The statistical analysis was performed on the 74.8% of the infants' looking behaviours (N=1206). In Table 2 we present infants' orienting responses by Condition [χ²(3) = 6.143, p > .05].
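For reference, this binomial comparison can be sketched as follows (not the authors' code; the success count is reconstructed from the quoted percentages and N = 1206, so it is approximate):

from scipy.stats import binomtest

n_looks = 1206  # looks directed at one of the two stimuli (74.8% of reactions)
n_success = round(0.432 / 0.748 * n_looks)  # ~43.2% of all reactions were successes
# Under H0, success and failure looks are equally likely (p = 0.5).
result = binomtest(n_success, n=n_looks, p=0.5, alternative="greater")
print(result.pvalue)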
In the statistical analysis that follows, as has already been explained in the Method section, we considered as dependent variable the mean time of success tendency, which represents the mean time of infant attention to mismatching stimuli subtracted from the mean time of attention to matching stimuli. Consequently, mean time above zero indicates longer attention to the matching stimuli, while mean time below zero indicates less attention to the matching stimuli.

[Table 1: Mean time of success tendency by presentation order of Conditions.]

We did not insert the non-looking responses (25.2%) in the statistical analysis, because we initially defined a standard criterion which would help us focus on intermodal perception of numerosity: we would measure the mean difference of total failure time (how long infants looked at the non-matching stimulus) from total success time (how long infants looked at the matching stimulus). Besides, from similar research we know that 1 out of 4 infants gets irritable or fatigued in a way that they do not look at the stimuli (Moore et al., 1987). Mix et al. (1997), in a replication of Moore et al. (1987), found no differences in results when they inserted the 25% (depicting the non-looking behaviours) in the statistical analysis (in that study, infants continued to look longer at the non-matching stimulus).
Analysis in Conditions with objects -piano sounds
We found a difference between mean times of success tendency when we compared the data in the Condition with identical objects and in the Condition with non-identical objects [independent samples T-test, t(489) = 2.205, p < .05]. More specifically, infants looked longer at the numerically matching visual stimuli when identical objects were presented (see Table 3). It seems that, in the particular experimental task, similarity of the visual stimuli facilitated infants' intermodal perception of numerosity.
We also found an age x gender interaction, General Linear Model, Univariate ANOVA, F (5, 491) = 4.675, p< .05 (see Figure 2). The interaction derives from the significant difference between mean times of success of boys and girls at 5 months, independent sample T-test, t (166) = -2.77, p<.01. It seems that boys at 5 months fail to intermodally match the stimuli on the basis of numerosity, while girls at 5 months seem to succeed in this experimental task. Later in development (at 7 and 9 months), boys seem to catch up with girls in the particular numerical task, while girls show a rather stable performance in time, with a slight decline at 9 months.
Analysis in Conditions with objects -face and piano / voice sounds
In the Conditions with socially salient stimuli we found no difference in mean times of success tendency between the Condition with the mother's face-voice and the Condition with the unknown woman's face-voice (see Table 3). This result indicates that infants' duration of attention to the matching stimuli was approximately the same, whether they were presented with their mother's face or with the face of a female stranger. Namely, familiarity or unfamiliarity of the face did not affect infants' looking behaviours.
Analysis of data by numerosity and quality of auditory stimuli
In the Conditions with objects-piano sounds, we found no effect of sound numerosity [one-way ANOVA, F(2, 488) = 2.012, p > .05] (see Table 5).
In the Conditions with social stimuli and objects, the face always remained "one" and the objects varied in numerosity (e.g. visual stimuli: mother's face - 3 balls; auditory stimulus: 3 sounds). When infants heard two or three sounds, they failed to detect the numerical invariant of the visual-auditory stimuli; namely, infants preferred to look at the face rather than at the numerically matching stimulus [one-way ANOVA, F(2, 712) = 10.291, p < .001] (see Table 5). This result implies that the face, as a socially salient stimulus, attracted infants' attention in a way that distracted them from intermodally matching the stimuli on the basis of their common numerosity (LSD post hoc test, Conditions 3 & 4: 1 sound vs. 2 sounds, mean difference = .83, SE = .21; 1 sound vs. 3 sounds, mean difference = .72, SE = .21; p < .01).
In the Conditions with objects - face and piano / voice, sound quality (piano sounds, mother's voice or stranger woman's voice) did not affect infants' duration of attention to the stimuli [ANOVA, F(2, 712) = 2.433, p > .05].
Discussion
Given the complexity of the issue of number perception in early infancy, in this study we tried to investigate some specific aspects of this human developing ability. The aim was to investigate possible developmental tendencies in perception of numerosity from 5 to 9 months, by means of preferential looking technique, using identical and non-identical, as well as social and non-social visual and auditory stimuli. Moreover we were interested in searching for possible age and gender differences and in detecting whether socially salient stimuli (face/voice) affect the perception of numerical invariant across different modalities.
In the present study, infants managed to intermodally detect the common numerosity of object-like stimuli and piano sounds. In agreement with the research of Cohen & Marks (2002), similarity of the shape of the projected objects facilitated infants' attention to the numerical invariant. Success tendency in the particular task was higher in the Conditions with identical objects than in the Conditions with non-identical objects. In the Conditions with social stimuli, infants' preferential looking was not influenced by the familiar/unfamiliar dimension of the stimuli. Whether infants were presented with the mother's face or with the unknown woman's face, fixation of attention was not significantly influenced. Moreover, in our sample, the quality of the auditory stimulus (voice vs. piano sound) did not affect infants' looking behaviours. In our sample, 5-month-old girls intermodally detected the numerical invariant of the object-stimuli. However, boys at 5 months failed to detect numerosity when object-stimuli were projected, but at 7 and 9 months they managed to abstract the numerical invariant. Antell and Keating (1983) had also found a gender difference in fixation time to the stimuli during the habituation phase: girls did better than boys in a visual perception numerical task with small-number stimuli. This difference was explained by hypothesizing a relation between habituation and gender, and not between gender and numerical perception. Moreover, the above-mentioned study concerned detection of numerosity through one modality.

[Table 4: Mean times (seconds) of success tendency by age × gender.]
According to Golombok & Fivush (1994) there are no computational differences between boys' and girls' numerical ability. Gender differences might be attributed to a different developmental course of the particular ability. According to our findings, development of perception of numerosity from 7 to 9 months seems to progressively counterbalance the gender differences that are observed at 5 months. Of course, this finding should be further investigated in a longitudinal study, where the course of development could be more evidently described.
In our study, girls seem to be attracted by the numerical correspondence between objects and mechanical sounds at an earlier developmental phase (5 months) than boys (7 months). This might be partially explained by the findings concerning sexual dimorphism documented in humans (see Connellan, Baron-Cohen, Wheelwright, Batki & Ahluwalia, 2000). It has been found that male neonates show a stronger interest in physical-mechanical stimuli than girls do. We can assume that boys at 5 months fail to abstract the numerical invariant between objects and mechanical sounds because they are attracted by the physical quality of the stimuli. At later developmental stages (in our sample, at 7 and 9 months), male infants seem to become able to concentrate on more abstract perceptual properties, such as the numerical invariant of the visual-auditory stimuli.
Regarding the developmental onset of the particular infant ability, our study confirms the assumption that intermodal perception is present at 7 months (see Starkey et al., 1990; Lewkowicz, 2000). This early infant ability may reflect a tendency to match different stimuli on the basis of a more abstract property, such as the numerical invariant. The finding that 5-month-old female infants can intermodally detect the numerical invariant shows that intermodal perception is pre-symbolic and direct, and that it may precede representational processing. This finding is consistent with the result of Walker-Andrews (1994), who had found that 4-month-old infants could detect amodal invariants, even if they had little or no experience.
Amodal invariants are perceptual cues that are tied to the structural properties of an action or event and are not specific to a particular sensory modality (Patterson & Werker, 2002). Young infants match audio-visual events based on temporal synchrony, duration, rate, affective information in the face and voice. For example, rhythm is an amodal invariant, for it can be detected by listening to a sound or watching its visible effect. Amodal relations are context free and can be perceived directly (Bahrick, 2001).
In our study we found that the face, as a socially salient stimulus, affected infants' tendency to intermodally perceive the common numerosity of the stimuli. According to Feigenson, Carey & Spelke (2002), social stimuli have a stronger effect on infants' attention than object-like stimuli do. Infants in our study looked longer at the female face than at the object-like stimuli, even if they heard two or three piano sounds. This finding is in accordance with that of Feigenson et al. (2002), implying that social stimuli, being more complicated than objects, affect infants' attention, since more complexity requires more time to be perceptually elaborated. Social stimuli attract infants' attention so that infants look longer at them, regardless of their ability to intermodally detect the numerical invariant. Moreover, in our sample, infants' attention to the visual stimuli was not differentiated along the dimension familiar vs. unfamiliar face. This finding is consistent with related research indicating that preference for a complex and unfamiliar face arises later in development (at about 9 months) (see Sherrod, 1979). Tomasello (1995) has also pointed out the significant developmental shift at the age of nine months.
In the Conditions with face-voice, girls, compared to boys, seemed to look longer at the stimuli. Related studies with younger infants have shown that female neonates present a stronger interest in the face than male infants (Connellan et al., 2000). It seems that, in the particular task, girls' attention is focused on the qualitative (social vs. non-social) discrimination of the visual-auditory stimuli.
Overall, in our study, regardless of gender differences, it seems that by 9 months the emergent symbolic system leads infants to show a preference for intermodally matched stimuli on the basis of a more abstract property, such as numerosity. According to Bahrick (2001), intermodal perception develops in the context of increasing specificity. At the same time, adults' counting system also encourages this tendency for preference. Infants' attention gets more selective with time, as it is adapted to more socially imposed prototypes.
Infants intermodally abstract the global information of events prior to nested properties or relations of the stimuli (Bahrick, 2001). Early infant perception of numerosity seems to be related to an immediate amodal grasping of number as a whole (subitizing). On the other hand, face-like stimuli seem to attract infants' attention compared to object-like stimuli. Moreover, face perception seems to be a strong paradigm of infants' detection of amodal invariants (Patterson and Werker, 2002). Nevertheless, the question still remains: to what degree can we infer the existence of two discrete perceptual systems?
It seems that different mechanisms underlie face perception and number perception. Nevertheless, object perception, face perception and number perception seem to have something in common: all three perceptual systems function very early in human life. Moreover, they share the fruits of intermodal perception. Early number perception would be more precisely described as an amodal function of the human mind. We rather deal with three separate but interconnected systems, an idea which is congruent with a domain-specific development of the human mind (see Karmiloff-Smith, 1998).
According to Spitz (1959), infants' experience is global and kinesthetic. Cognitions, actions and perceptions are directly experienced by infants in terms of shape, intensity, temporal pattern, vitality affects, categorical affects and hedonic tones (Stern, 1985). Developmental psychologists should perhaps try out new methodologies and designs that would take into account the synthesis rather than the analysis of specific infant abilities. Longitudinal naturalistic studies of intermodal perception of numerosity could clarify infants' early perceptual abilities. Franchak, Kretch, Soska & Adolph (2011) have proposed a head-mounted eye-tracking technique which allows researchers to investigate infants' exploratory visual behavior in their home environment. In this way, we could better understand infants' cognition in the real world and from a holistic point of view. In conclusion, it seems that the early intermodal perceptual ability to discriminate between social and non-social stimuli and the amodal detection of small number sets are of vital significance in infant development. | 2021-05-17T00:03:23.882Z | 2020-10-15T00:00:00.000 | {
"year": 2020,
"sha1": "b391137751cca2993f2ade27e87478b5a13d59d9",
"oa_license": null,
"oa_url": "https://ejournals.epublishing.ekt.gr/index.php/psychology/article/download/23697/19822",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "6366af158d4f6f426c09eca974efef8ce72c79ff",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Philosophy"
]
} |
123249240 | pes2o/s2orc | v3-fos-license | Joint Diagnostic of the Surface Air Temperature in Southern South America and the Madden–Julian Oscillation
The objective of this research is to explore the relationship between maximum and minimum temperatures, daily precipitation, and the Madden–Julian oscillation (MJO). It was found that the different phases of the MJO show a consistent signal on winter temperature variability and precipitation in southeastern South America. Additionally, this paper explores the spatial–temporal variations of mutual information and joint entropy between temperature and the MJO. A defined spatial pattern was observed, with an increased signal in northeastern Argentina and southern Brazil. In the local mutual information analysis, periods in which the mutual information doubled the average values were observed over the entire region. These results indicate that these connections can be used to forecast winter temperatures with a better skill score in situations where both variables covary.
Introduction
The Madden–Julian oscillation (MJO) is the dominant phenomenon in the intraseasonal variability of the tropical atmosphere. This oscillation is responsible for the variability in these regions, and its influence extends over important atmospheric and oceanic parameters. The typical MJO cycle is approximately 30-60 days long (Madden and Julian 1971, 1972, 1994; Zhang 2005). The influence of atmospheric circulation anomalies extends from the region under the direct influence of this phenomenon, affecting global circulation patterns. Diverse studies have examined the interaction between variations within the synoptic scale and the MJO. Matthews and Kiladis (1999) studied the interaction of high-frequency transient disturbances and convection with the MJO. They found evidence that the propagation of high-frequency waves in the Indian Ocean could be of relevance for the onset of convective anomalies on an intraseasonal scale. For tropical-extratropical interactions, Matthews and Meredith (2004) and Zhou and Miller (2005) found that the variability of the atmospheric southern annular pattern on an intraseasonal scale shows a relation with the atmospheric variability related to the MJO during the southern winter.
Additionally, in an analysis of the relation of the MJO with temperature variability, Vecchi and Bond (2004) found a significant statistical signal and spatial coherence between the atmospheric variability at high latitudes and the convective intertropical variability at the intraseasonal scale during the Northern Hemisphere winter. Because of this coherence, a connection was inferred between the MJO and surface temperature variations at high latitudes through the effects that the MJO induces on humidity and anomalies in geopotential height in the middle troposphere. These results agree with those of Higgins and Mo (1997), Mo and Higgins (1998), and Jones (2000). Additionally, Minetti and Vargas (1997) demonstrated the existence of a modulation in the intraseasonal fluctuations of temperature anomalies in the Argentinean tropics due to the different El Niño–Southern Oscillation (ENSO) phases.
This paper explores the link between temperature anomalies in southern South America and the MJO, as well as its possible implications for joint diagnosis. Section 2 presents the daily data used for the study. The spatial distribution of the relations between the MJO and maximum and minimum temperature anomalies is presented in section 3. Joint entropy and mutual information are presented in section 4. Section 5 shows the spatiotemporal behavior of the joint entropy and mutual information. Finally, section 6 summarizes our main conclusions.
Data
Series of daily maximum and minimum temperatures at 53 stations, provided by the National Weather Service of Argentina and the Claris Project, were selected. These series ensured a large quantity of data to produce a stable estimation of the clusters over the daily temperatures (in this case, more than 20 000 values). A representative geographic distribution of stations was selected to include as many climatic regions of southern South America as possible, as well as to cover a wide latitudinal range (23°-55°S).
The Real-time Multivariate MJO series 1 and 2 (RMM1 and RMM2) bivariate indexes were used to determine the possible relationship between the MJO and atmospheric circulation in southeastern South America. These indexes are based on the first two empirical orthogonal functions (EOFs) of the combined fields of the zonal wind speeds at 850 and 200 hPa and the outgoing longwave radiation (OLR), averaged between 15°N and 15°S. The projection of this daily information on the EOFs, with the annual cycle and interannual variability filtered out, produces a temporal series of each principal component. These reproduce the variability of the intraseasonal scale. The two principal components that make up the multivariate index are called RMM1 and RMM2. More details concerning the construction of this index can be found in Wheeler and Hendon (2004). Additionally, the index is available online (http://www.bom.gov.au/bmrc/clfor/cfstaff/matw/maproom/RMM/).
Based on this description, the MJO is divided into eight phases, each with an average duration of approximately 6 days. As depicted in Wheeler and Hendon (2004), in phase 1 convection of a decaying MJO event is present in the central Pacific, while enhanced convection of a growing event is evident over Africa and the western Indian Ocean. Over subsequent phases, convection in the Indian Ocean builds and moves to the east. In general, the trajectory of this bivariate index is represented by orbits around the origin, which reflects the systematic eastward propagation of the MJO. A greater amplitude of these orbits means strong MJO cycles. For periods during which the MJO's signal is weak, this is reflected as random displacements near the origin (Wheeler and Hendon 2004).
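As a concrete sketch of how such an index is used downstream (this is an illustration, not code from the paper; the input file name is hypothetical), the amplitude and the eight Wheeler–Hendon phases can be recovered from the daily (RMM1, RMM2) pairs as octants of the phase plane:

import numpy as np
import pandas as pd

# Daily RMM index; columns: date, RMM1, RMM2 (hypothetical file name).
rmm = pd.read_csv("rmm_daily.csv", parse_dates=["date"])

rmm["amplitude"] = np.hypot(rmm["RMM1"], rmm["RMM2"])
theta = np.arctan2(rmm["RMM2"], rmm["RMM1"])  # angle in (-pi, pi]
# Wheeler-Hendon convention: phase 1 starts at 180 degrees and the eight
# phases advance counterclockwise in 45-degree octants.
rmm["phase"] = ((np.floor(theta / (np.pi / 4)).astype(int) + 4) % 8) + 1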
The relation between surface temperature and the MJO
Average fields of the maximum and minimum temperature anomalies associated with each of the eight MJO phases showing a coherent signal were constructed to evaluate the relationship between the MJO and the surface temperatures in the region. These fields considered the events in which the MJO was active. The MJO was defined as active when the amplitude of the index exceeded the upper tercile.
Additionally, given that the greatest intraseasonal signal in the region is observed during the cold season (Naumann 2010), the average fields were calculated during the southern winter, defined as the months of June-August (JJA).
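A minimal sketch of this compositing step follows (illustrative only; the anomaly table and column names are hypothetical, and it reuses the rmm table with amplitude and phase from the sketch above):

import pandas as pd

# Station-level daily temperature anomalies; hypothetical columns:
# date, station, tmax_anom, tmin_anom
anom = pd.read_csv("station_anomalies.csv", parse_dates=["date"])

df = anom.merge(rmm[["date", "amplitude", "phase"]], on="date")
df = df[df["date"].dt.month.isin([6, 7, 8])]                # southern winter (JJA)
active = df["amplitude"] > df["amplitude"].quantile(2 / 3)  # upper tercile = active MJO

# Mean anomaly per MJO phase and station, as in Figs. 1 and 2.
composite = (df[active]
             .groupby(["phase", "station"])[["tmax_anom", "tmin_anom"]]
             .mean())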
Figures 1 and 2 show the average fields associated with the maximum and minimum temperatures for each MJO phase. These show that the temperature signals associated with the MJO are spatially coherent. For the minimum temperature, for example, warm anomalies are observed over the entire northern region of the domain for phases 5 and 6. For phases 2 and 3, the inverse pattern of behavior is observed, with cold anomalies in nearly the entire region, except southeastern Brazil. It is worth noting that the observed anomalies, in both the warming and cooling phases, are statistically significant (estimated with a normal test). They also exceed values of 1°C in almost all regions, even reaching average values of 4°C.
A coherent pattern of behavior also exists in what is observed for the maximum temperature (Fig. 1) and precipitation (Naumann 2010), principally in the northeastern portion of the region. In phases 3 and 5, negative maximum temperature anomalies (positive precipitation anomalies) were observed in the east-central portion of the region, while positive (negative) anomalies exist in Patagonia and the central Andes. Phases 1, 4, and 7 are characterized by the opposite behavior, with a deficit in precipitation in southeastern Brazil and northeastern Argentina. Excesses also stand out in phase 1 in Patagonia and the central mountain range.
Joint entropy and mutual information
When two discrete variables, x and y, are considered for the same time t, it is possible to measure the degree of uncertainty or the information associated between them (Shannon 1948, 1950). The quantity that measures these properties is the joint entropy H(x, y). If x and y can assume m1 and m2 values, respectively, then the joint entropy can be calculated as

H(x, y) = −∑_{i=1}^{m1} ∑_{j=1}^{m2} p_{i,j} log(p_{i,j}), (1)

where p_{i,j} represents the probability that the variable x is in state i while, simultaneously, the variable y is in state j.
Joint entropy varies from the minimum theoretical entropy (H = 0) to log(m1) + log(m2). The relationship between the joint entropy and the individual entropies is

H(x, y) ≤ H(x) + H(y). (2)

This relationship shows that the joint entropy is always less than or equal to the sum of the entropies of each variable. The equality is only valid for the case in which the variables x and y are independent. Additionally, it is possible to define the mutual information (I) as a measure of the information shared by two variables. This quantity can be defined as a function of the individual and joint entropies of two variables, x and y, such that

I(x, y) = H(x) + H(y) − H(x, y). (3)

If both variables are independent, the joint entropy is equal to the sum of the individual entropies, and, consequently, the mutual information is zero.
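Equations (1)-(3) translate directly into plug-in estimators over two discrete series. The sketch below is illustrative only, not the paper's code; it reports values in bits (log base 2), which matches the up-to-1-bit mutual information quoted later:

import numpy as np

def entropy(x, base=2):
    """Plug-in estimate of H(x) for a discrete series."""
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log(p)).sum() / np.log(base)

def joint_entropy(x, y, base=2):
    """Plug-in estimate of H(x, y), Eq. (1), for two aligned discrete series."""
    _, counts = np.unique(np.column_stack([x, y]), axis=0, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log(p)).sum() / np.log(base)

def mutual_information(x, y, base=2):
    """I(x, y) = H(x) + H(y) - H(x, y), Eq. (3)."""
    return entropy(x, base) + entropy(y, base) - joint_entropy(x, y, base)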
Mutual information between temperature and the Madden-Julian oscillation
Joint entropy and mutual information are two variables that measure the degree of shared information between variables. In the case of mutual information, this represents a measure of the reduction of entropy of a variable due to the effects of another. In other words, this represents the amount of information that two or more variables have in common. Because of this, high mutual information values indicate a large reduction of the uncertainty, while lesser I values represent a small reduction in these uncertainties.
This measure of correlation between two variables can be used to examine the relationship between daily temperature and the MJO. Following this goal, the mutual information between the discrete series resulting from the bivariate classification of temperature and the phases associated with the bivariate index that represents the MJO was analyzed. Classification of the daily series of temperature produced four groups. The names of these groups were assigned according to their thermal properties (which reflect aspects of humidity, cloudiness, orography, etc.) and their association with precipitation on annual and daily scales. The groups are warm, wet, cold, and dry. The groups differentiate weather types, which represent circulation patterns. More details on the bivariate temperature classification can be found in Vargas and Naumann (2008).
In this way, Fig. 3 shows the spatial distribution of the mutual information between the daily temperature series discretized using cluster analysis and the MJO. Here, a southward gradient of this variable is observed, with maxima in the tropical regions. Additionally, there is a maximum relationship between the two variables analyzed in southeastern Brazil and in part of the Argentinean northeast.
This result is highly related to the strengthening of convection in the equatorial Indian and Pacific Oceans. Grimm and Silva Dias (1995) found considerable consistency among the circulation patterns (more precisely, Rossby waves forced by tropical convection) observed at intraseasonal (MJO) and interannual (ENSO) time scales. Similarly, the authors found a connection in the dynamic mechanisms between 30-60-day oscillations and the South Atlantic convergence zone (SACZ).
These results indicate that the mutual information between temperature and the Madden–Julian oscillation is not high compared with the maximum values that the variable I can reach, that is, the sum of the individual entropies [Eq. (3)]. However, I reaches up to 1 bit of information in southeastern Brazil. Also, the spatial distribution of the signal is coherent with the physical processes associated with teleconnections with the intraseasonal oscillation due to equatorial convection.
However, because the results in Fig. 3 refer to average values of mutual information, and because the amplitude of the MJO varies with time, periods in which the temperature and the MJO covary to a greater or lesser extent are expected. Consequently, it is of interest to analyze the local variations of these correlations and their implications for the overall diagnosis.
Under certain conditions, principally in the case of transitory dynamic phenomena, the analysis of average entropy fails to detect correlations existing among variables. Also, the standardized average entropy of the climate system is greater than 0.9 (Naumann and Vargas 2009). This means that, as a main property, the average predictability of the system is low (around 10%). However, for practical applications, the average values of the different properties that define the predictability of the system are, in general, not of great usefulness. Instead, a specific prediction based on a finite sequence of length N (conditional and dynamic entropy), or on the prediction given by the relation between two variables that share information in specific periods of time (joint entropy), could be of great importance.
Because joint entropy is a measure of the dependence between two variables, it is possible to detect periods in which this measure decreases locally, which implies coherence between the processes underlying the two variables. Consequently, to analyze the local behavior of the relation between the discretized temperature and the MJO, the joint entropy and the mutual information were calculated on 30-day moving time windows (Figs. 4 and 5). For the region as a whole, these show that periods exist when the mutual information doubles its average value and the standardized joint entropy decreases from values of 0.8 to less than 0.5.
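The moving-window calculation can be sketched as follows. This is a self-contained illustration in which the normalization of the joint entropy by its maximum possible value, log2 of the number of joint states, is an assumed convention, and the 30-day window follows the text.

```python
# A sketch of the 30-day moving-window diagnostics, under the assumptions above.
import numpy as np

def _H(counts):
    """Shannon entropy (bits) from a vector of occurrence counts."""
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def windowed_diagnostics(x, y, window=30):
    """Per-window mutual information and standardized joint entropy."""
    h_max = np.log2(len(np.unique(x)) * len(np.unique(y)))  # assumed normalizer
    mi, hj_std = [], []
    for s in range(len(x) - window + 1):
        xs, ys = x[s:s + window], y[s:s + window]
        hx = _H(np.unique(xs, return_counts=True)[1])
        hy = _H(np.unique(ys, return_counts=True)[1])
        hxy = _H(np.unique(np.stack([xs, ys], axis=1), axis=0,
                           return_counts=True)[1])
        mi.append(hx + hy - hxy)        # I = H(X) + H(Y) - H(X, Y)
        hj_std.append(hxy / h_max)      # standardized joint entropy
    return np.array(mi), np.array(hj_std)
```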
Additionally, the mutual information in this region follows a well-defined seasonal pattern, with maxima in the autumn and winter periods and minima during summer. These results match those of Naumann (2010), where the greatest intraseasonal temperature signal is observed in winter. They imply that for these periods it is possible to make a diagnosis and prognosis of the temperature behavior in the region based on the analysis of MJO variability.
Conclusions
There is a coherent signal across the distinct phases of the MJO in the maximum and minimum temperatures of southeastern South America.
Because the joint entropy is a measure of the dependence between two variables, it is possible to detect periods in which this measure decreases locally, which implies coherence between the processes underlying the two variables. For this reason, the joint entropy and the mutual information between the temperature and the MJO were calculated. For the region as a whole, it was observed that periods exist when the mutual information doubles its average value and the joint entropy decreases from values of 0.8 to less than 0.5.
Likewise, in the localities analyzed, the mutual information follows a well-defined seasonal pattern, with maxima in the autumn and winter seasons and minima during summer. These results imply that for these periods it is possible to make a diagnosis and prognosis of the temperature's behavior in the region based on the analysis of MJO variability. Moreover, because it is now possible to predict the MJO with significant accuracy up to 15-17 days in advance (Seo et al. 2009), this information can be used to infer the temperature prognosis in the region at that time scale.
Finally, it is clearly necessary to investigate the local conditions under which the MJO could be a good predictor of temperatures in South America. This can be done through analysis of the joint and conditional entropy of the different joint processes that lead to extreme thermal events in the region.
FIG. 1. Average fields of maximum winter (JJA) temperature anomalies with amplitude greater than the upper tercile for MJO phases 1-8. The dots represent the stations used in the analysis.
FIG. 3. Mutual information between the bivariate classification of temperature and the MJO.
FIG. 4. (a) Standardized joint entropy and (b) mutual information between the bivariate classification of temperature in Corrientes, Argentina (27.43°S, 58.74°W), and the MJO, with their respective averages (dotted line). | 2019-04-20T13:05:14.428Z | 2010-08-01T00:00:00.000 | {
"year": 2010,
"sha1": "ec19480ccf5a5ae2a6294748d131fc4db36a5016",
"oa_license": "CCBYNCSA",
"oa_url": "https://ri.conicet.gov.ar/bitstream/11336/16537/1/CONICET_Digital_Nro.20308.pdf",
"oa_status": "GREEN",
"pdf_src": "Anansi",
"pdf_hash": "919130f466499a3ed639400e19608ad74d00456b",
"s2fieldsofstudy": [
"Environmental Science",
"Geography"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
53958854 | pes2o/s2orc | v3-fos-license | Analysis of Stochastic Reliability Characteristics of a Repairable 2-out-of-3 System with Minimal Repair at Failure
In this paper, we study the reliability and availability characteristics of a repairable 2-out-of-3 system. Failure and repair times are assumed exponential. The explicit expressions of reliability and availability characteristics such as mean time to system failure (MTSF), steady-state availability, busy period and profit function are derived using Kolmogorov’s forward equations method. Various cases are analyzed graphically to investigate the impact of system parameters on MTSF, availability, busy period and profit function.
Introduction
During operation, the strength of a system gradually deteriorates until some point of deterioration failure, or until other types of failures occur. Maintenance policies are vital in the analysis of deteriorating systems, as they help improve system reliability and availability. Maintenance models assume perfect repair (as good as new), minimal repair (as bad as old), or imperfect repair, which lies between perfect and minimal repair. There are systems of three or four units in which two or three units are sufficient to perform the entire function of the system; examples are 2-out-of-3, 2-out-of-4, and 3-out-of-4 redundant systems. Such systems have wide application in the real world, especially in industry. Many research results have been reported on the reliability of 2-out-of-3 redundant systems. For example, [1] analyzed reliability models for a 2-out-of-3 redundant system subject to conditional arrival time of the server. Reference [2] presented reliability and economic analysis of a 2-out-of-3 redundant system with priority to repair, [3] studied the MTSF and cost effectiveness of a 2-out-of-3 cold standby system with probability of repair and inspection, while [4] examined the cost benefit analysis of series systems with cold standby components and a repairable service station. References [5,6] examined the cost analysis of two-unit cold standby systems involving preventive maintenance. Reference [7] studied the cost and probabilistic analysis of a series system with mixed standby components, and [8] studied the cost benefit analysis of series systems with warm standby components involving general repair time, where the server is not subject to breakdowns. Failure and repair times are assumed to have exponential distributions, and measures of system effectiveness such as MTSF, steady-state availability, busy period and profit function are obtained. Reference [9] studied the availability of a system with different repair options, while [10] evaluated the reliability of network flows with stochastic capacity and cost constraints.
In this paper, a 2-out-of-3 redundant system is constructed and its corresponding mathematical models are derived. The main contribution of this paper is twofold. The first is to develop explicit expressions for the MTSF, system availability, busy period and profit function. The second is to perform a parametric investigation of the various system parameters and capture their effect on the MTSF, availability, busy period and profit function.
The rest of the paper is organized as follows. Section 2 presents the notations and assumptions of the study. Section 3 gives the states of the system. Section 4 deals with the formulation of the models. The results of our numerical simulations are presented and discussed in Section 5. The paper is concluded in Section 6.
Notations and Assumptions
Unit states: full operation / reduced capacity / failure / standby.
1) The system is a 2-out-of-3 system.
2) The system works in a reduced capacity before failure.
3) The system has three states: normal, reduced and failed.
4) Unit failure and repair rates are constant.
5) Repair is as bad as old (minimal).
6) Failure and repair times are assumed exponential.
7) The system fails when two units have failed.
8) The system is under the attention of two repairmen.
States of the System
Mean Time to System Failure
Let $P(t)$ be the probability row vector at time $t$; the initial conditions for this problem are
$$P(0) = [P_0(0), P_1(0), P_2(0), P_3(0), P_4(0), P_5(0), P_6(0)] = [1, 0, 0, 0, 0, 0, 0].$$
Applying Kolmogorov's forward equations yields a system of differential equations that can be written in matrix form as $\dot{P} = T P$, where $T$ is the transition rate matrix. It is difficult to evaluate the transient solutions; hence, following [4-6], the procedure to develop the explicit expression for the MTSF is to delete the seventh row and column of the matrix $T$ and take the transpose to produce a new matrix, say $A$. The expected time to reach an absorbing state is then obtained from
$$\mathrm{MTSF} = P(0)\,(-A^{-1})\,[1, 1, 1, 1, 1, 1]^{\mathrm{T}}.$$
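A small numerical sketch of this recipe is given below, for a hypothetical three-state toy chain rather than the paper's seven-state model. The generator T follows the column convention of dP/dt = TP (columns sum to zero), matching the forward-equation formulation above, and the final line applies the MTSF formula as reconstructed.

```python
# A minimal sketch, assuming a toy 3-state chain: not the paper's matrix.
import numpy as np

def mtsf(T, absorbing, p0):
    """MTSF = P(0) (-A)^{-1} 1, with A the transposed transient block of T."""
    keep = [i for i in range(T.shape[0]) if i != absorbing]
    A = T[np.ix_(keep, keep)].T          # delete row/column, then transpose
    return p0 @ (-np.linalg.inv(A)) @ np.ones(len(keep))

# Hypothetical rates: 0 -> 1 at lam, 1 -> 0 at mu (repair), 1 -> 2 (failed) at lam.
lam, mu = 0.2, 1.0
T = np.array([[-lam,        mu, 0.0],    # columns sum to zero: dP/dt = T P
              [ lam, -(mu+lam), 0.0],
              [ 0.0,       lam, 0.0]])   # state 2 is absorbing
print(mtsf(T, absorbing=2, p0=np.array([1.0, 0.0])))  # 35.0 for this toy chain
```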
System Availability Analysis
For the availability case of Figure 1, using the initial conditions of Section 4.1,
$$P(0) = [P_0(0), P_1(0), \ldots, P_6(0)] = [1, 0, 0, 0, 0, 0, 0],$$
the system of differential equations in (1) can be expressed in matrix form as $\dot{P} = A P$. The steady-state availability $A_V(\infty)$ is the long-run probability that the system is in an operative state. In the steady state, the derivatives of the state probabilities become zero, so (2) becomes
$$A P = 0, \qquad (5)$$
and, using the normalizing condition
$$\sum_{i=0}^{6} P_i(\infty) = 1, \qquad (6)$$
we substitute (6) into the last row of (5), following [4-6].
We solve the resulting system of linear equations to obtain the state probabilities. A computer program (MATLAB) is used to develop the explicit expression for the steady-state availability, which is too lengthy to be shown here.
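A minimal sketch of this steady-state solve is given below, again for a hypothetical three-state chain rather than the paper's seven-state model. The "replace the last row with the normalizing condition" step follows the text; the choice of up states is an assumption of the toy example.

```python
# A sketch of solving A P = 0 with the normalization in the last row.
import numpy as np

def steady_state(A):
    """Solve A P = 0 subject to sum(P) = 1 via last-row substitution."""
    n = A.shape[0]
    M = A.copy()
    M[-1, :] = 1.0                  # substitute the normalizing condition (6)
    b = np.zeros(n); b[-1] = 1.0
    return np.linalg.solve(M, b)

# Hypothetical irreducible 3-state generator (columns sum to zero):
# 0 -> 1 at lam, 1 -> 0 at mu, 1 -> 2 at lam, 2 -> 0 at mu2 (repair of failure).
lam, mu, mu2 = 0.2, 1.0, 0.5
A = np.array([[-lam,        mu,  mu2],
              [ lam, -(mu+lam),  0.0],
              [ 0.0,       lam, -mu2]])
P = steady_state(A)
availability = P[0] + P[1]          # up states 0 and 1 in this toy example
print(P, availability)
```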
Busy Period Analysis
Using the same initial conditions as in Section 4.1, together with (5) and (6), the busy period is obtained as follows. Let $B$ be the probability that the repairman is busy either repairing a failed unit or exchanging degraded units for new ones. In the steady state, the derivatives of the state probabilities become zero, which enables us to compute the steady-state busy period due to failure as the sum of the steady-state probabilities of the states in which the repairman is busy. As before, we substitute (6) into the last row of (5) (see [4-6]) and solve the resulting system of linear equations to obtain the state probabilities. A computer program (MATLAB) is used to develop the explicit expression for the busy period, which is too lengthy to be shown here.
Profit Analysis
The system/units are subjected to corrective maintenance at failure, as can be observed in states 4, 5 and 6; from Figure 1, the repairman is busy performing corrective maintenance on the units in these states. According to [4,5], the expected profit per unit time incurred by the system in the steady state is given by: Profit = total revenue generated - cost incurred for repairing the failed units.
That is, $\mathrm{PF} = C_0 A_V(\infty) - C_1 B(\infty)$, where PF is the profit incurred by the system, $C_0$ is the revenue per unit up time of the system, and $C_1$ is the accumulated cost per unit time for which the system is under repair and unit exchange.
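The sketch below evaluates this profit expression and also illustrates the busy-period quantity of the previous subsection. The steady-state vector P, the split into up states, and the cost constants C0 and C1 are illustrative assumptions (only the busy states 4, 5 and 6 follow the text); in practice P would come from a steady-state solve like the one sketched above.

```python
# A sketch of PF = C0 * Av - C1 * B under the stated assumptions.
import numpy as np

# Hypothetical steady-state vector for the paper's seven states (0-6).
P = np.array([0.55, 0.20, 0.10, 0.05, 0.04, 0.03, 0.03])
C0, C1 = 1000.0, 300.0          # illustrative revenue and repair-cost rates

Av = P[[0, 1, 2, 3]].sum()      # steady-state availability (assumed up states)
B = P[[4, 5, 6]].sum()          # busy period: repairman busy in states 4, 5, 6
PF = C0 * Av - C1 * B           # expected profit per unit time
print(Av, B, PF)
```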
Results and Discussions
In this section, we numerically obtain the results for the mean time to system failure, system availability, busy period and profit function for all the developed models.
For the model analysis, a fixed set of parameter values is used throughout the simulations for consistency.
Figure 20. Effect of the simultaneous failure rate of two units on the busy period.
Results for the MTSF, steady-state availability, profit and busy period are shown in Figures 10, 15 and 20 and in Figures 11, 16 and 21, respectively. It is evident from Figures 10 and 15 that the steady-state availability and profit decrease as the corresponding failure parameter increases, while in Figures 11 and 16 the steady-state availability and profit increase with the corresponding repair parameter (see also Figure 13). | 2018-11-30T12:42:31.614Z | 2013-07-23T00:00:00.000 | {
"year": 2013,
"sha1": "e337d7a7f8b2700cf97f5afa396c84779920a6c1",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=35142",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "e337d7a7f8b2700cf97f5afa396c84779920a6c1",
"s2fieldsofstudy": [
"Mathematics",
"Engineering"
],
"extfieldsofstudy": [
"Mathematics"
]
} |